2024-01-02 10:55:27,346 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10 2024-01-02 10:55:27,368 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL timeout: 13 mins 2024-01-02 10:55:27,383 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/cluster_5195cdae-b675-1741-5385-06d6a585e5bd, deleteOnExit=true 2024-01-02 10:55:27,384 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/test.cache.data in system properties and HBase conf 2024-01-02 10:55:27,384 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/hadoop.tmp.dir in system properties and HBase conf 2024-01-02 10:55:27,385 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/hadoop.log.dir in system properties and HBase conf 2024-01-02 10:55:27,386 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/mapreduce.cluster.local.dir in system properties and HBase conf 2024-01-02 10:55:27,386 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/mapreduce.cluster.temp.dir in system properties and HBase conf 2024-01-02 10:55:27,387 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2024-01-02 10:55:27,518 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2024-01-02 10:55:27,962 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2024-01-02 10:55:27,968 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2024-01-02 10:55:27,969 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2024-01-02 10:55:27,969 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/yarn.nodemanager.log-dirs in system properties and HBase conf 2024-01-02 10:55:27,970 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2024-01-02 10:55:27,970 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2024-01-02 10:55:27,971 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2024-01-02 10:55:27,971 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2024-01-02 10:55:27,971 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/dfs.journalnode.edits.dir in system properties and HBase conf 2024-01-02 10:55:27,972 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2024-01-02 10:55:27,972 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/nfs.dump.dir in system properties and HBase conf 2024-01-02 10:55:27,973 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/java.io.tmpdir in system properties and HBase conf 2024-01-02 10:55:27,973 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/dfs.journalnode.edits.dir in system properties and HBase conf 2024-01-02 10:55:27,974 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2024-01-02 10:55:27,974 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2024-01-02 10:55:28,558 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2024-01-02 10:55:28,563 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2024-01-02 10:55:28,879 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2024-01-02 10:55:29,068 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2024-01-02 10:55:29,088 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2024-01-02 10:55:29,128 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2024-01-02 10:55:29,164 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/java.io.tmpdir/Jetty_localhost_40119_hdfs____qojlc7/webapp 2024-01-02 10:55:29,311 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40119 2024-01-02 10:55:29,321 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2024-01-02 10:55:29,321 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2024-01-02 10:55:29,835 WARN [Listener at localhost/43439] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2024-01-02 10:55:29,921 WARN [Listener at localhost/43439] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2024-01-02 10:55:29,942 WARN [Listener at localhost/43439] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2024-01-02 10:55:29,949 INFO [Listener at localhost/43439] log.Slf4jLog(67): jetty-6.1.26 2024-01-02 10:55:29,955 INFO [Listener at localhost/43439] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/java.io.tmpdir/Jetty_localhost_44997_datanode____nic16m/webapp 2024-01-02 10:55:30,069 INFO [Listener at localhost/43439] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44997 2024-01-02 10:55:30,457 WARN [Listener at localhost/43067] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2024-01-02 10:55:30,481 WARN [Listener at localhost/43067] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2024-01-02 10:55:30,486 WARN [Listener at localhost/43067] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2024-01-02 10:55:30,489 INFO [Listener at localhost/43067] log.Slf4jLog(67): jetty-6.1.26 2024-01-02 10:55:30,496 INFO [Listener at localhost/43067] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/java.io.tmpdir/Jetty_localhost_41627_datanode____wkiipx/webapp 2024-01-02 10:55:30,640 INFO [Listener at localhost/43067] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:41627 2024-01-02 10:55:30,659 WARN [Listener at localhost/35143] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2024-01-02 10:55:30,676 WARN [Listener at localhost/35143] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2024-01-02 10:55:30,680 WARN [Listener at localhost/35143] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2024-01-02 10:55:30,682 INFO [Listener at localhost/35143] log.Slf4jLog(67): jetty-6.1.26 2024-01-02 10:55:30,689 INFO [Listener at localhost/35143] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/java.io.tmpdir/Jetty_localhost_36603_datanode____43q4pp/webapp 2024-01-02 10:55:30,810 INFO [Listener at localhost/35143] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:36603 2024-01-02 10:55:30,833 WARN [Listener at localhost/42899] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2024-01-02 10:55:31,102 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa7830688ca311cb9: Processing first storage report for DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d from datanode ab0f122c-aa18-46bb-8258-7abccd88bb31 2024-01-02 10:55:31,104 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa7830688ca311cb9: from storage DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d node DatanodeRegistration(127.0.0.1:44277, datanodeUuid=ab0f122c-aa18-46bb-8258-7abccd88bb31, infoPort=35339, 
infoSecurePort=0, ipcPort=35143, storageInfo=lv=-57;cid=testClusterID;nsid=1331944918;c=1704192928642), blocks: 0, hasStaleStorage: true, processing time: 1 msecs, invalidatedBlocks: 0 2024-01-02 10:55:31,104 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xefa186cac3cdc55c: Processing first storage report for DS-1ec67827-86f0-4d42-b3b5-887ba7f05758 from datanode 9f52681b-e55e-48e4-a1d5-28659187cd89 2024-01-02 10:55:31,104 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xefa186cac3cdc55c: from storage DS-1ec67827-86f0-4d42-b3b5-887ba7f05758 node DatanodeRegistration(127.0.0.1:36671, datanodeUuid=9f52681b-e55e-48e4-a1d5-28659187cd89, infoPort=36869, infoSecurePort=0, ipcPort=43067, storageInfo=lv=-57;cid=testClusterID;nsid=1331944918;c=1704192928642), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2024-01-02 10:55:31,105 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xa7830688ca311cb9: Processing first storage report for DS-b6070415-4f57-4f14-8c2c-e56eeefdf64d from datanode ab0f122c-aa18-46bb-8258-7abccd88bb31 2024-01-02 10:55:31,105 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xa7830688ca311cb9: from storage DS-b6070415-4f57-4f14-8c2c-e56eeefdf64d node DatanodeRegistration(127.0.0.1:44277, datanodeUuid=ab0f122c-aa18-46bb-8258-7abccd88bb31, infoPort=35339, infoSecurePort=0, ipcPort=35143, storageInfo=lv=-57;cid=testClusterID;nsid=1331944918;c=1704192928642), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2024-01-02 10:55:31,105 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xefa186cac3cdc55c: Processing first storage report for DS-7f29521e-8667-4c9c-a129-3f67cab5fc4a from datanode 9f52681b-e55e-48e4-a1d5-28659187cd89 2024-01-02 10:55:31,105 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xefa186cac3cdc55c: from storage DS-7f29521e-8667-4c9c-a129-3f67cab5fc4a node DatanodeRegistration(127.0.0.1:36671, datanodeUuid=9f52681b-e55e-48e4-a1d5-28659187cd89, infoPort=36869, infoSecurePort=0, ipcPort=43067, storageInfo=lv=-57;cid=testClusterID;nsid=1331944918;c=1704192928642), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2024-01-02 10:55:31,105 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb7a48e57887ce90d: Processing first storage report for DS-50584779-8bc8-44b8-b8b3-73eb5f90a869 from datanode b410da91-94f8-458a-853a-df29a3aede77 2024-01-02 10:55:31,105 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb7a48e57887ce90d: from storage DS-50584779-8bc8-44b8-b8b3-73eb5f90a869 node DatanodeRegistration(127.0.0.1:43327, datanodeUuid=b410da91-94f8-458a-853a-df29a3aede77, infoPort=34863, infoSecurePort=0, ipcPort=42899, storageInfo=lv=-57;cid=testClusterID;nsid=1331944918;c=1704192928642), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2024-01-02 10:55:31,106 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xb7a48e57887ce90d: Processing first storage report for DS-04bef2d2-9bc1-4680-ae30-901f1f697776 from datanode b410da91-94f8-458a-853a-df29a3aede77 2024-01-02 10:55:31,106 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xb7a48e57887ce90d: from storage 
DS-04bef2d2-9bc1-4680-ae30-901f1f697776 node DatanodeRegistration(127.0.0.1:43327, datanodeUuid=b410da91-94f8-458a-853a-df29a3aede77, infoPort=34863, infoSecurePort=0, ipcPort=42899, storageInfo=lv=-57;cid=testClusterID;nsid=1331944918;c=1704192928642), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2024-01-02 10:55:31,330 DEBUG [Listener at localhost/42899] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10 2024-01-02 10:55:31,349 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testLogrollWhileStreaming Thread=136, OpenFileDescriptor=429, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=5121 2024-01-02 10:55:31,353 DEBUG [Listener at localhost/42899] util.ClassSize(228): Using Unsafe to estimate memory layout 2024-01-02 10:55:32,000 INFO [Listener at localhost/42899] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2024-01-02 10:55:32,032 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:32,210 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testLogrollWhileStreaming, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testLogrollWhileStreaming, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:32,283 DEBUG [Listener at localhost/42899] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2024-01-02 10:55:32,283 DEBUG [Listener at localhost/42899] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2024-01-02 10:55:32,286 DEBUG [Listener at localhost/42899] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE at java.lang.Enum.valueOf(Enum.java:238) at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.(FanOutOneBlockAsyncDFSOutputHelper.java:304) at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:53) at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:190) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:116) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:719) at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:128) at 
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:881) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:574) at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.init(AbstractFSWAL.java:515) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:160) at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:295) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase.initWAL(WALEntryStreamTestBase.java:151) at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream.setUp(TestBasicWALEntryStream.java:80) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:32,441 DEBUG [AsyncFSWAL-1-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396 java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo) at java.lang.Class.getDeclaredMethod(Class.java:2130) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241) at 
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:417) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:300) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:335) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:32,451 DEBUG [AsyncFSWAL-1-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:32,458 DEBUG [AsyncFSWAL-1-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite at java.net.URLClassLoader.findClass(URLClassLoader.java:387) at java.lang.ClassLoader.loadClass(ClassLoader.java:418) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352) at java.lang.ClassLoader.loadClass(ClassLoader.java:351) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:264) at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.(ProtobufDecoder.java:118) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476) at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625) at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:300) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:335) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:32,501 DEBUG [AsyncFSWAL-1-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:32,502 DEBUG [AsyncFSWAL-1-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:32,652 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testLogrollWhileStreaming/testLogrollWhileStreaming.1704192932225 2024-01-02 10:55:32,653 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK]] 2024-01-02 10:55:32,914 INFO [AsyncFSWAL-0-hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10-prefix:testLogrollWhileStreaming] wal.AbstractFSWAL(1141): Slow sync cost: 145 ms, current pipeline: [DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK]] 2024-01-02 10:55:32,938 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:33,103 DEBUG [AsyncFSWAL-1-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:33,105 DEBUG [AsyncFSWAL-1-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:33,105 DEBUG [AsyncFSWAL-1-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:33,118 INFO [Listener at localhost/42899] wal.AbstractFSWAL(802): Rolled WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testLogrollWhileStreaming/testLogrollWhileStreaming.1704192932225 with entries=3, filesize=437 B; new WAL 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testLogrollWhileStreaming/testLogrollWhileStreaming.1704192933083 2024-01-02 10:55:33,120 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:33,120 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(716): hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testLogrollWhileStreaming/testLogrollWhileStreaming.1704192932225 is not closed yet, will try archiving it next time 2024-01-02 10:55:33,124 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:33,126 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:33,126 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 319 2024-01-02 10:55:33,144 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testLogrollWhileStreaming/testLogrollWhileStreaming.1704192932225 not finished, retry = 0 2024-01-02 10:55:33,146 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:33,257 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(425): Encountered a malformed edit, seeking back to last good position in file, from 445 to 437 java.io.EOFException: Invalid PB, EOF? 
Ignoring; originalPosition=437, currentPosition=445 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:354) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:95) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:83) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:259) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:173) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.peek(WALEntryStream.java:111) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.next(WALEntryStream.java:118) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase$WALEntryStreamWithRetries.access$001(WALEntryStreamTestBase.java:82) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase$WALEntryStreamWithRetries.lambda$next$0(WALEntryStreamTestBase.java:95) at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:183) at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:134) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase$WALEntryStreamWithRetries.next(WALEntryStreamTestBase.java:94) at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream.testLogrollWhileStreaming(TestBasicWALEntryStream.java:157) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: encoded_region_name, table_name, log_sequence_number, write_time at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:232) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:237) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.parseDelimitedFrom(ProtobufUtil.java:3578) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:348) ... 40 more 2024-01-02 10:55:33,265 DEBUG [Listener at localhost/42899] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testLogrollWhileStreaming/testLogrollWhileStreaming.1704192932225 2024-01-02 10:55:33,275 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 201 2024-01-02 10:55:33,309 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:33,309 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testLogrollWhileStreaming:(num 1704192933083) 2024-01-02 10:55:33,320 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testLogrollWhileStreaming Thread=142 (was 136) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:46232 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.PeerCache@2dccfb16 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253) org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46) org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client 
DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33802 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:43439 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-1-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HBase-Metrics2-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=454 (was 429) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4954 (was 5121) 2024-01-02 10:55:33,330 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testNewEntriesWhileStreaming Thread=142, OpenFileDescriptor=454, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4953 2024-01-02 10:55:33,333 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:33,341 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testNewEntriesWhileStreaming, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testNewEntriesWhileStreaming, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:33,364 DEBUG [AsyncFSWAL-3-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:33,368 DEBUG [AsyncFSWAL-3-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:33,368 DEBUG [AsyncFSWAL-3-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:33,377 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testNewEntriesWhileStreaming/testNewEntriesWhileStreaming.1704192933342 2024-01-02 10:55:33,379 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK]] 2024-01-02 10:55:33,402 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 201 2024-01-02 10:55:33,414 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 437 2024-01-02 10:55:33,430 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:33,431 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testNewEntriesWhileStreaming:(num 1704192933342) 2024-01-02 10:55:33,444 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testNewEntriesWhileStreaming Thread=144 (was 142) Potentially hanging 
thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33802 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:36812 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-3-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=459 (was 454) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4952 (was 4953) 2024-01-02 10:55:33,459 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderWithPartialWALEntryFailingFilter Thread=144, OpenFileDescriptor=459, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4951 2024-01-02 10:55:33,460 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:33,466 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testReplicationSourceWALReaderWithPartialWALEntryFailingFilter, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:33,488 DEBUG [AsyncFSWAL-4-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:33,490 DEBUG [AsyncFSWAL-4-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:33,491 DEBUG [AsyncFSWAL-4-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:33,495 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter.1704192933467 2024-01-02 10:55:33,498 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:33,678 INFO [Listener at localhost/42899] regionserver.ReplicationSourceWALReader(119): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2024-01-02 10:55:33,697 WARN [Thread-177] regionserver.ReplicationSourceWALReader(177): Failed to read stream of replication entries org.apache.hadoop.hbase.replication.regionserver.WALEntryFilterRetryableException: failing filter at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream$PartialWALEntryFailingWALEntryFilter.filter(TestBasicWALEntryStream.java:840) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.filterEntry(ReplicationSourceWALReader.java:361) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:224) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:157) 2024-01-02 10:55:33,733 WARN [Thread-177] regionserver.ReplicationSourceWALReader(177): Failed to read stream of replication entries org.apache.hadoop.hbase.replication.regionserver.WALEntryFilterRetryableException: failing filter at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream$PartialWALEntryFailingWALEntryFilter.filter(TestBasicWALEntryStream.java:840) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.filterEntry(ReplicationSourceWALReader.java:361) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:224) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:157) 2024-01-02 10:55:33,776 WARN [Thread-177] regionserver.ReplicationSourceWALReader(177): Failed to read stream of replication entries org.apache.hadoop.hbase.replication.regionserver.WALEntryFilterRetryableException: failing filter at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream$PartialWALEntryFailingWALEntryFilter.filter(TestBasicWALEntryStream.java:840) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.filterEntry(ReplicationSourceWALReader.java:361) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:224) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:157) 2024-01-02 10:55:33,829 DEBUG [Thread-177] wal.ProtobufLogReader(420): EOF at position 458 2024-01-02 10:55:33,830 DEBUG [Thread-177] regionserver.ReplicationSourceWALReader(162): Read 3 WAL entries eligible for replication 2024-01-02 10:55:33,831 DEBUG [Thread-177] wal.ProtobufLogReader(420): EOF at position 458 2024-01-02 10:55:33,861 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:33,862 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testReplicationSourceWALReaderWithPartialWALEntryFailingFilter:(num 1704192933467) 2024-01-02 10:55:33,868 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33802 [Sending block BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004]] datanode.DataXceiver(647): DatanodeRegistration(127.0.0.1:44277, datanodeUuid=ab0f122c-aa18-46bb-8258-7abccd88bb31, infoPort=35339, infoSecurePort=0, ipcPort=35143, storageInfo=lv=-57;cid=testClusterID;nsid=1331944918;c=1704192928642):Got exception while serving BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 to /127.0.0.1:33802 java.io.FileNotFoundException: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/cluster_5195cdae-b675-1741-5385-06d6a585e5bd/dfs/data/data4/current/BP-4569960-172.31.14.131-1704192928642/current/rbw/blk_1073741828_1004.meta (No such file or directory) at 
java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream.<init>(FileIoProvider.java:846) at org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream.<init>(FileIoProvider.java:838) at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.getFileInputStream(FileIoProvider.java:329) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getMetaDataInputStream(FsDatasetImpl.java:272) at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:310) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:595) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:145) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:100) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:33,872 ERROR [DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33802 [Sending block BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004]] datanode.DataXceiver(323): 127.0.0.1:44277:DataXceiver error processing READ_BLOCK operation src: /127.0.0.1:33802 dst: /127.0.0.1:44277 java.io.FileNotFoundException: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/cluster_5195cdae-b675-1741-5385-06d6a585e5bd/dfs/data/data4/current/BP-4569960-172.31.14.131-1704192928642/current/rbw/blk_1073741828_1004.meta (No such file or directory) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream.<init>(FileIoProvider.java:846) at org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream.<init>(FileIoProvider.java:838) at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.getFileInputStream(FileIoProvider.java:329) at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getMetaDataInputStream(FsDatasetImpl.java:272) at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:310) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:595) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:145) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:100) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:33,875 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderWithPartialWALEntryFailingFilter Thread=146 (was 144) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:46232 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33802 [Sending block BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004] java.lang.invoke.MethodHandleNatives.resolve(Native Method) java.lang.invoke.MemberName$Factory.resolve(MemberName.java:975) java.lang.invoke.MemberName$Factory.resolveOrFail(MemberName.java:1000) java.lang.invoke.MethodHandles$Lookup.resolveOrFail(MethodHandles.java:1394) java.lang.invoke.MethodHandles$Lookup.linkMethodHandleConstant(MethodHandles.java:1750) java.lang.invoke.MethodHandleNatives.linkMethodHandleConstant(MethodHandleNatives.java:477) org.apache.commons.io.IOUtils.<clinit>(IOUtils.java:183) org.apache.hadoop.hdfs.server.datanode.FileIoProvider.getFileInputStream(FileIoProvider.java:333) org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getMetaDataInputStream(FsDatasetImpl.java:272) org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:310) org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:595) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:145) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:100) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:36812 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Thread-177 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:454) org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.newBlockReader(BlockReaderRemote2.java:411) org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:864) org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753) org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:387) org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:717) org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:665) org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:941) org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:996) org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:772) java.io.FilterInputStream.read(FilterInputStream.java:83) org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:333) org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:345) org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readHeader(ProtobufLogReader.java:194) org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:222) org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.reset(ProtobufLogReader.java:160) org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:387) org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(WALEntryStream.java:159) org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:149) Potentially hanging thread: AsyncFSWAL-4-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=462 (was 459) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4942 (was 4951) 2024-01-02 10:55:33,876 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33846 [Sending block BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004]] datanode.DataXceiver(647): DatanodeRegistration(127.0.0.1:44277, datanodeUuid=ab0f122c-aa18-46bb-8258-7abccd88bb31, infoPort=35339, infoSecurePort=0, ipcPort=35143, storageInfo=lv=-57;cid=testClusterID;nsid=1331944918;c=1704192928642):Got exception while serving BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 to /127.0.0.1:33846 org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 at org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:493) at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:256) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:595) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:145) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:100) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:33,876 WARN [Thread-177] impl.BlockReaderFactory(768): I/O error constructing remote block reader. java.io.IOException: Got error, status=ERROR, status message opReadBlock BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004, for OP_READ_BLOCK, self=/127.0.0.1:33846, remote=/127.0.0.1:44277, for file /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter.1704192933467, for pool BP-4569960-172.31.14.131-1704192928642 block 1073741828_1004 at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:118) at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.checkSuccess(BlockReaderRemote2.java:445) at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.newBlockReader(BlockReaderRemote2.java:413) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:864) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:387) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:717) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:665) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:941) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:996) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:772) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:333) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:345) at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readHeader(ProtobufLogReader.java:194) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:222) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.reset(ProtobufLogReader.java:160) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:387) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(WALEntryStream.java:159) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:149) 2024-01-02 10:55:33,881 WARN [Thread-177] hdfs.DFSInputStream(687): Failed to connect to /127.0.0.1:44277 for block BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004, add to deadNodes and continue. java.io.IOException: Got error, status=ERROR, status message opReadBlock BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004, for OP_READ_BLOCK, self=/127.0.0.1:33846, remote=/127.0.0.1:44277, for file /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter.1704192933467, for pool BP-4569960-172.31.14.131-1704192928642 block 1073741828_1004 at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:118) at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.checkSuccess(BlockReaderRemote2.java:445) at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.newBlockReader(BlockReaderRemote2.java:413) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:864) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:387) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:717) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:665) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:941) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:996) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:772) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:333) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:345) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readHeader(ProtobufLogReader.java:194) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:222) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.reset(ProtobufLogReader.java:160) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:387) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.reset(WALEntryStream.java:159) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:149) 2024-01-02 10:55:33,881 ERROR [DataXceiver for 
client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33846 [Sending block BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004]] datanode.DataXceiver(323): 127.0.0.1:44277:DataXceiver error processing READ_BLOCK operation src: /127.0.0.1:33846 dst: /127.0.0.1:44277 org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 at org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:493) at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:256) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:595) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:145) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:100) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:33,885 DEBUG [Thread-177] wal.ProtobufLogReader(425): Encountered a malformed edit, seeking back to last good position in file, from 466 to 458 java.io.EOFException: Invalid PB, EOF? Ignoring; originalPosition=458, currentPosition=466 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:354) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:95) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:83) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:259) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:173) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch(ReplicationSourceWALReader.java:258) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) Caused by: org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: encoded_region_name, table_name, log_sequence_number, write_time at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:232) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:237) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.parseDelimitedFrom(ProtobufUtil.java:3578) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:348) ... 
7 more 2024-01-02 10:55:33,888 INFO [Thread-177] wal.AbstractFSWALProvider(464): Log hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter.1704192933467 was moved to hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter.1704192933467 2024-01-02 10:55:33,893 WARN [DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33850 [Sending block BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004]] datanode.DataXceiver(647): DatanodeRegistration(127.0.0.1:44277, datanodeUuid=ab0f122c-aa18-46bb-8258-7abccd88bb31, infoPort=35339, infoSecurePort=0, ipcPort=35143, storageInfo=lv=-57;cid=testClusterID;nsid=1331944918;c=1704192928642):Got exception while serving BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 to /127.0.0.1:33850 org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 at org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:493) at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:256) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:595) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:145) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:100) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:33,893 WARN [Thread-177] impl.BlockReaderFactory(768): I/O error constructing remote block reader. 
java.io.IOException: Got error, status=ERROR, status message opReadBlock BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004, for OP_READ_BLOCK, self=/127.0.0.1:33850, remote=/127.0.0.1:44277, for file /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter.1704192933467, for pool BP-4569960-172.31.14.131-1704192928642 block 1073741828_1004 at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:118) at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.checkSuccess(BlockReaderRemote2.java:445) at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.newBlockReader(BlockReaderRemote2.java:413) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:864) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:387) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:717) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:665) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:941) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:996) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:772) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:333) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:345) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readHeader(ProtobufLogReader.java:194) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:222) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initReader(ProtobufLogReader.java:176) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:62) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:171) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:329) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:311) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:300) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:439) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.openReader(WALEntryStream.java:338) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:394) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:180) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch(ReplicationSourceWALReader.java:258) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) 2024-01-02 10:55:33,894 ERROR [DataXceiver for client 
DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33850 [Sending block BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004]] datanode.DataXceiver(323): 127.0.0.1:44277:DataXceiver error processing READ_BLOCK operation src: /127.0.0.1:33850 dst: /127.0.0.1:44277 org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 at org.apache.hadoop.hdfs.server.datanode.BlockSender.getReplica(BlockSender.java:493) at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:256) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:595) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:145) at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:100) at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:33,896 WARN [Thread-177] hdfs.DFSInputStream(687): Failed to connect to /127.0.0.1:44277 for block BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004, add to deadNodes and continue. java.io.IOException: Got error, status=ERROR, status message opReadBlock BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004 received exception org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-4569960-172.31.14.131-1704192928642:blk_1073741828_1004, for OP_READ_BLOCK, self=/127.0.0.1:33850, remote=/127.0.0.1:44277, for file /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter.1704192933467, for pool BP-4569960-172.31.14.131-1704192928642 block 1073741828_1004 at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:118) at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.checkSuccess(BlockReaderRemote2.java:445) at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote2.newBlockReader(BlockReaderRemote2.java:413) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:864) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:753) at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:387) at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:717) at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:665) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:941) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:996) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:772) at java.io.FilterInputStream.read(FilterInputStream.java:83) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:333) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:345) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readHeader(ProtobufLogReader.java:194) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initInternal(ProtobufLogReader.java:222) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.initReader(ProtobufLogReader.java:176) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:62) at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.init(ProtobufLogReader.java:171) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:329) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:311) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:300) at org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:439) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.openReader(WALEntryStream.java:338) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.resetReader(WALEntryStream.java:394) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:180) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch(ReplicationSourceWALReader.java:258) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) 2024-01-02 10:55:33,896 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testEOFExceptionForRecoveredQueueWithMultipleLogs Thread=147, OpenFileDescriptor=461, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4941 2024-01-02 10:55:33,898 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:33,899 DEBUG [Thread-177] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReaderWithPartialWALEntryFailingFilter.1704192933467 2024-01-02 10:55:33,900 DEBUG [Thread-177] regionserver.ReplicationSourceWALReader(162): Read 0 WAL entries eligible for replication 2024-01-02 10:55:33,902 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testEOFExceptionForRecoveredQueueWithMultipleLogs, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionForRecoveredQueueWithMultipleLogs, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:33,931 DEBUG [AsyncFSWAL-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:33,933 DEBUG [AsyncFSWAL-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:33,933 DEBUG [AsyncFSWAL-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:33,946 INFO [Listener at localhost/42899] 
wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionForRecoveredQueueWithMultipleLogs/testEOFExceptionForRecoveredQueueWithMultipleLogs.1704192933904 2024-01-02 10:55:33,946 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK]] 2024-01-02 10:55:34,064 DEBUG [Listener at localhost/42899] util.CommonFSUtils$DfsBuilderUtility(753): Could not find replicate method on builder; will not set replicate when creating output stream java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DistributedFileSystem$HdfsDataOutputStreamBuilder.replicate() at java.lang.Class.getMethod(Class.java:1786) at org.apache.hadoop.hbase.util.CommonFSUtils$DfsBuilderUtility.<clinit>(CommonFSUtils.java:750) at org.apache.hadoop.hbase.util.CommonFSUtils.createForWal(CommonFSUtils.java:802) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter.initOutput(ProtobufLogWriter.java:102) at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:160) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:78) at org.apache.hadoop.hbase.wal.FSHLogProvider.createWriter(FSHLogProvider.java:62) at org.apache.hadoop.hbase.wal.WALFactory.createWALWriter(WALFactory.java:478) at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream.testEOFExceptionForRecoveredQueueWithMultipleLogs(TestBasicWALEntryStream.java:642) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) 2024-01-02 10:55:34,108 INFO [Listener at localhost/42899] regionserver.ReplicationSourceWALReader(119): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=10 2024-01-02 10:55:34,121 DEBUG [Thread-201] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/user/jenkins/log.1 2024-01-02 10:55:34,125 WARN [Thread-201] regionserver.ReplicationSourceWALReader(302): Forcing removal of 0 length log in queue: hdfs://localhost:43439/user/jenkins/log.2 2024-01-02 10:55:34,125 DEBUG [Thread-201] regionserver.ReplicationSourceWALReader(330): Read 3 WAL entries eligible for replication 2024-01-02 10:55:34,126 DEBUG [Thread-201] regionserver.ReplicationSourceWALReader(246): Stopping the replication source wal reader 2024-01-02 10:55:34,137 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:34,138 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testEOFExceptionForRecoveredQueueWithMultipleLogs:(num 1704192933904) 2024-01-02 10:55:34,147 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testEOFExceptionForRecoveredQueueWithMultipleLogs Thread=148 (was 147) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:46232 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-5-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) 
org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33888 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:36812 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=480 (was 461) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4939 (was 4941) 2024-01-02 10:55:34,157 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testEOFExceptionInOldWALsDirectory Thread=148, OpenFileDescriptor=480, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4938 2024-01-02 10:55:34,159 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:34,163 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testEOFExceptionInOldWALsDirectory, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:34,182 DEBUG [AsyncFSWAL-6-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:34,184 DEBUG [AsyncFSWAL-6-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:34,184 DEBUG [AsyncFSWAL-6-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:34,189 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934164 2024-01-02 10:55:34,190 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:34,209 DEBUG [AsyncFSWAL-6-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:34,209 DEBUG [AsyncFSWAL-6-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:34,210 DEBUG [AsyncFSWAL-6-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:34,214 INFO [Listener at localhost/42899] wal.AbstractFSWAL(802): Rolled WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934164 with entries=0, filesize=83 B; new WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934190 2024-01-02 10:55:34,214 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:34,214 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(716): hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934164 is not closed yet, will try archiving it next time 2024-01-02 10:55:34,215 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:34,222 INFO [WAL-Archive-0] wal.AbstractFSWAL(783): Archiving hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934164 to hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testEOFExceptionInOldWALsDirectory.1704192934164 2024-01-02 10:55:34,317 INFO [Listener at localhost/42899] wal.AbstractFSWALProvider(464): Log hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934164 was moved to hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testEOFExceptionInOldWALsDirectory.1704192934164 2024-01-02 10:55:34,330 INFO [Listener at localhost/42899] regionserver.ReplicationSourceWALReader(119): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2024-01-02 10:55:34,331 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [10,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:34,334 INFO [Thread-215] wal.AbstractFSWALProvider(464): Log hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934164 was moved to 
hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testEOFExceptionInOldWALsDirectory.1704192934164 2024-01-02 10:55:34,338 INFO [Thread-215] wal.AbstractFSWALProvider(464): Log hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934164 was moved to hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testEOFExceptionInOldWALsDirectory.1704192934164 2024-01-02 10:55:34,339 WARN [Thread-215] regionserver.ReplicationSourceWALReader(302): Forcing removal of 0 length log in queue: hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testEOFExceptionInOldWALsDirectory.1704192934164 2024-01-02 10:55:34,346 DEBUG [Thread-215] wal.ProtobufLogReader(420): EOF at position 83 2024-01-02 10:55:34,364 DEBUG [Thread-215] wal.ProtobufLogReader(420): EOF at position 83 2024-01-02 10:55:34,381 DEBUG [Thread-215] wal.ProtobufLogReader(420): EOF at position 83 2024-01-02 10:55:34,399 DEBUG [Thread-215] wal.ProtobufLogReader(420): EOF at position 83 2024-01-02 10:55:34,418 DEBUG [Thread-215] wal.ProtobufLogReader(420): EOF at position 83 2024-01-02 10:55:34,443 DEBUG [Thread-215] wal.ProtobufLogReader(425): Encountered a malformed edit, seeking back to last good position in file, from 91 to 83 java.io.EOFException: Invalid PB, EOF? Ignoring; originalPosition=83, currentPosition=91 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:354) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:95) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:83) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:259) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:173) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch(ReplicationSourceWALReader.java:258) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) Caused by: org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: encoded_region_name, table_name, log_sequence_number, write_time at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:232) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:237) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.parseDelimitedFrom(ProtobufUtil.java:3578) at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:348) ... 7 more 2024-01-02 10:55:34,449 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:34,449 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testEOFExceptionInOldWALsDirectory:(num 1704192934190) 2024-01-02 10:55:34,450 WARN [Thread-215] regionserver.WALEntryStream(207): Couldn't get file length information about log hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934190, it was closed cleanly currently replicating from: hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934190 at position: 0 2024-01-02 10:55:34,451 DEBUG [Thread-215] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionInOldWALsDirectory/testEOFExceptionInOldWALsDirectory.1704192934190 2024-01-02 10:55:34,451 DEBUG [Thread-215] regionserver.ReplicationSourceWALReader(162): Read 0 WAL entries eligible for replication 2024-01-02 10:55:34,462 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testEOFExceptionInOldWALsDirectory Thread=150 (was 148) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:46232 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33888 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Thread-215 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.util.concurrent.AsyncGet$Util.wait(AsyncGet.java:59) org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1528) org.apache.hadoop.ipc.Client.call(Client.java:1486) org.apache.hadoop.ipc.Client.call(Client.java:1385) org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118) com.sun.proxy.$Proxy31.getFileInfo(Unknown Source) org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:800) sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) java.lang.reflect.Method.invoke(Method.java:498) org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422) org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165) org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157) org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359) com.sun.proxy.$Proxy34.getFileInfo(Unknown Source) org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1652) org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1523) org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1520) org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1520) org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.checkAllBytesParsed(WALEntryStream.java:205) org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:183) org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch(ReplicationSourceWALReader.java:258) org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:36812 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-6-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=483 (was 480) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4937 (was 4938) 2024-01-02 10:55:34,474 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderRecovered Thread=150, OpenFileDescriptor=483, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4937 2024-01-02 10:55:34,475 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:34,479 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testReplicationSourceWALReaderRecovered, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderRecovered, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:34,499 DEBUG [AsyncFSWAL-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:34,502 DEBUG [AsyncFSWAL-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:34,502 
DEBUG [AsyncFSWAL-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:34,505 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderRecovered/testReplicationSourceWALReaderRecovered.1704192934480 2024-01-02 10:55:34,506 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:34,552 DEBUG [AsyncFSWAL-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:34,552 DEBUG [AsyncFSWAL-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:34,553 DEBUG [AsyncFSWAL-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:34,578 INFO [Listener at localhost/42899] wal.AbstractFSWAL(802): Rolled WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderRecovered/testReplicationSourceWALReaderRecovered.1704192934480 with entries=10, filesize=1.30 KB; new WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderRecovered/testReplicationSourceWALReaderRecovered.1704192934509 2024-01-02 10:55:34,579 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK]] 2024-01-02 10:55:34,579 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(716): hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderRecovered/testReplicationSourceWALReaderRecovered.1704192934480 is not closed yet, will try archiving it next time 2024-01-02 10:55:34,593 INFO [Listener at localhost/42899] regionserver.ReplicationSourceWALReader(119): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=10, replicationBatchQueueCapacity=1 2024-01-02 10:55:34,602 DEBUG [Thread-234] 
regionserver.ReplicationSourceWALReader(162): Read 10 WAL entries eligible for replication 2024-01-02 10:55:34,607 DEBUG [Thread-234] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderRecovered/testReplicationSourceWALReaderRecovered.1704192934480 2024-01-02 10:55:34,612 DEBUG [Thread-234] regionserver.ReplicationSourceWALReader(162): Read 0 WAL entries eligible for replication 2024-01-02 10:55:34,617 DEBUG [Thread-234] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderRecovered/testReplicationSourceWALReaderRecovered.1704192934509 2024-01-02 10:55:34,618 DEBUG [Thread-234] regionserver.ReplicationSourceWALReader(162): Read 5 WAL entries eligible for replication 2024-01-02 10:55:34,618 DEBUG [Thread-234] regionserver.ReplicationSourceWALReader(246): Stopping the replication source wal reader 2024-01-02 10:55:34,622 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:34,622 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testReplicationSourceWALReaderRecovered:(num 1704192934509) 2024-01-02 10:55:34,632 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderRecovered Thread=151 (was 150) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:46232 [Waiting for operation #16] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33888 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-7-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=486 (was 483) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4935 (was 4937) 2024-01-02 10:55:34,643 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testEOFExceptionForRecoveredQueue Thread=151, OpenFileDescriptor=486, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4935 2024-01-02 10:55:34,644 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:34,648 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testEOFExceptionForRecoveredQueue, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionForRecoveredQueue, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:34,666 DEBUG [AsyncFSWAL-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:34,667 DEBUG [AsyncFSWAL-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:34,668 DEBUG [AsyncFSWAL-8-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping 
handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:34,671 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEOFExceptionForRecoveredQueue/testEOFExceptionForRecoveredQueue.1704192934649 2024-01-02 10:55:34,672 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK]] 2024-01-02 10:55:34,679 INFO [Listener at localhost/42899] regionserver.ReplicationSourceWALReader(119): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=10 2024-01-02 10:55:34,686 WARN [Thread-242] regionserver.ReplicationSourceWALReader(302): Forcing removal of 0 length log in queue: emptyLog 2024-01-02 10:55:34,686 DEBUG [Thread-242] regionserver.ReplicationSourceWALReader(246): Stopping the replication source wal reader 2024-01-02 10:55:34,702 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:34,702 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testEOFExceptionForRecoveredQueue:(num 1704192934649) 2024-01-02 10:55:34,716 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testEOFExceptionForRecoveredQueue Thread=152 (was 151) Potentially hanging thread: AsyncFSWAL-8-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=489 (was 486) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4935 (was 4935) 2024-01-02 10:55:34,731 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderWrongPosition Thread=152, OpenFileDescriptor=489, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4935 2024-01-02 10:55:34,733 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:34,739 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testReplicationSourceWALReaderWrongPosition, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:34,778 DEBUG [AsyncFSWAL-9-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:34,781 DEBUG [AsyncFSWAL-9-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:34,781 DEBUG [AsyncFSWAL-9-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:34,808 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934740 2024-01-02 10:55:34,809 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:34,835 DEBUG [AsyncFSWAL-9-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:34,836 DEBUG [AsyncFSWAL-9-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:34,836 DEBUG [AsyncFSWAL-9-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:34,853 INFO [Listener at localhost/42899] wal.AbstractFSWAL(802): Rolled WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934740 with entries=1, filesize=208 B; new WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934812 2024-01-02 10:55:34,856 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK]] 2024-01-02 10:55:34,856 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(716): hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934740 is not closed yet, will try archiving it next time 2024-01-02 10:55:34,872 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:34,876 INFO [Listener at localhost/42899] regionserver.ReplicationSourceWALReader(119): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2024-01-02 10:55:34,897 DEBUG [Thread-255] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934740 2024-01-02 10:55:34,908 DEBUG [Thread-255] regionserver.ReplicationSourceWALReader(162): Read 1 WAL entries eligible for replication 2024-01-02 10:55:34,916 DEBUG [Thread-255] wal.ProtobufLogReader(420): EOF at position 2583 2024-01-02 10:55:34,916 DEBUG [Thread-255] regionserver.ReplicationSourceWALReader(162): Read 20 WAL entries eligible for replication 2024-01-02 10:55:34,916 DEBUG [Thread-255] wal.ProtobufLogReader(420): EOF at position 2583 2024-01-02 10:55:34,949 DEBUG [AsyncFSWAL-9-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:34,950 DEBUG [AsyncFSWAL-9-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:34,950 DEBUG [AsyncFSWAL-9-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 
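Note on the recurring DEBUG pattern in this run: the pairs of "wal.ProtobufLogReader(420): EOF at position N" and "wal.ProtobufLogReader(425): Encountered a malformed edit, seeking back to last good position in file" entries (with java.io.EOFException: Invalid PB, EOF?) record the tail-read behavior these tests exercise. When the reader hits a partial protobuf record at the end of a WAL that is still being written, it treats the EOFException as expected, rewinds to the offset of the last fully parsed entry, and lets the replication WAL reader retry once more bytes are available. The sketch below is only an illustration of that seek-back-and-retry pattern under simplifying assumptions; the class, interface, and method names are invented for the example and are not the ProtobufLogReader or WALEntryStream source.

    // Illustrative sketch of the "seek back to last good position" pattern seen in the
    // DEBUG entries above. All names except java.io.EOFException are hypothetical;
    // this is not the HBase implementation.
    import java.io.EOFException;
    import java.io.IOException;

    final class TailTolerantWalReader {
      interface SeekableEntrySource {
        long getPosition() throws IOException;        // current byte offset in the WAL file
        void seek(long position) throws IOException;  // rewind to a known-good offset
        WalEntry readEntry() throws IOException;      // throws EOFException on a partial trailing record
      }
      static final class WalEntry { /* parsed edit; details omitted */ }

      private final SeekableEntrySource source;

      TailTolerantWalReader(SeekableEntrySource source) {
        this.source = source;
      }

      /** Returns the next complete entry, or null if only a partial record exists at the tail. */
      WalEntry next() throws IOException {
        long originalPosition = source.getPosition();  // last good position before this read attempt
        try {
          return source.readEntry();
        } catch (EOFException partialRecord) {
          // Partial protobuf at the tail of a WAL that is still being written:
          // rewind so the same entry can be retried after more bytes are flushed.
          long currentPosition = source.getPosition();
          source.seek(originalPosition);
          System.out.printf("seeking back from %d to %d%n", currentPosition, originalPosition);
          return null;
        }
      }
    }

Rewinding instead of failing is consistent with what the surrounding entries show: after each EOF the reader resumes and the "Read N WAL entries eligible for replication" batches continue once the WAL is rolled or archived.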
2024-01-02 10:55:34,970 INFO [Listener at localhost/42899] wal.AbstractFSWAL(802): Rolled WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934812 with entries=20, filesize=2.52 KB; new WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934917 2024-01-02 10:55:34,970 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK]] 2024-01-02 10:55:34,970 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(716): hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934812 is not closed yet, will try archiving it next time 2024-01-02 10:55:35,002 DEBUG [Thread-255] wal.ProtobufLogReader(425): Encountered a malformed edit, seeking back to last good position in file, from 2591 to 2583 java.io.EOFException: Invalid PB, EOF? Ignoring; originalPosition=2583, currentPosition=2591 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:354) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:95) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:83) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:259) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:173) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch(ReplicationSourceWALReader.java:258) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) Caused by: org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: encoded_region_name, table_name, log_sequence_number, write_time at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:232) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:237) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.parseDelimitedFrom(ProtobufUtil.java:3578) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:348) ... 
7 more 2024-01-02 10:55:35,012 DEBUG [Thread-255] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934812 2024-01-02 10:55:35,031 DEBUG [Thread-255] regionserver.ReplicationSourceWALReader(162): Read 0 WAL entries eligible for replication 2024-01-02 10:55:35,035 DEBUG [Thread-255] wal.ProtobufLogReader(420): EOF at position 1333 2024-01-02 10:55:35,035 DEBUG [Thread-255] regionserver.ReplicationSourceWALReader(162): Read 10 WAL entries eligible for replication 2024-01-02 10:55:35,036 DEBUG [Thread-255] wal.ProtobufLogReader(420): EOF at position 1333 2024-01-02 10:55:35,057 DEBUG [Thread-255] wal.ProtobufLogReader(425): Encountered a malformed edit, seeking back to last good position in file, from 1341 to 1333 java.io.EOFException: Invalid PB, EOF? Ignoring; originalPosition=1333, currentPosition=1341 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:354) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:95) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:83) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:259) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:173) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.tryAdvanceStreamAndCreateWALBatch(ReplicationSourceWALReader.java:258) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145) Caused by: org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: encoded_region_name, table_name, log_sequence_number, write_time at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:232) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:237) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.parseDelimitedFrom(ProtobufUtil.java:3578) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:348) ... 
7 more 2024-01-02 10:55:35,064 DEBUG [Thread-255] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWrongPosition/testReplicationSourceWALReaderWrongPosition.1704192934917 2024-01-02 10:55:35,065 DEBUG [Thread-255] regionserver.ReplicationSourceWALReader(162): Read 0 WAL entries eligible for replication 2024-01-02 10:55:35,070 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 3 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:35,070 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testReplicationSourceWALReaderWrongPosition:(num 1704192934917) 2024-01-02 10:55:35,088 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderWrongPosition Thread=155 (was 152) Potentially hanging thread: AsyncFSWAL-9-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:46232 [Waiting for operation #21] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Thread-255 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.handleEmptyWALEntryBatch(ReplicationSourceWALReader.java:251) 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:148) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33888 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:36812 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Parameter Sending Thread #1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460) java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362) java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=492 (was 489) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4918 (was 4935) 2024-01-02 10:55:35,104 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testWALKeySerialization Thread=155, OpenFileDescriptor=492, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4915 2024-01-02 10:55:35,106 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:35,112 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testWALKeySerialization, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testWALKeySerialization, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:35,161 DEBUG [AsyncFSWAL-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:35,164 DEBUG [AsyncFSWAL-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:35,164 DEBUG [AsyncFSWAL-10-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:35,174 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testWALKeySerialization/testWALKeySerialization.1704192935113 2024-01-02 10:55:35,177 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK]] 2024-01-02 10:55:35,204 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:35,205 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testWALKeySerialization:(num 1704192935113) 2024-01-02 10:55:35,218 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testWALKeySerialization Thread=156 (was 155) Potentially hanging thread: AsyncFSWAL-10-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=495 (was 492) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4914 (was 4915) 2024-01-02 10:55:35,233 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderDisabled Thread=156, OpenFileDescriptor=495, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4913 2024-01-02 10:55:35,234 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:35,240 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testReplicationSourceWALReaderDisabled, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderDisabled, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:35,266 DEBUG [AsyncFSWAL-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:35,278 DEBUG [AsyncFSWAL-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:35,278 DEBUG [AsyncFSWAL-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:35,295 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderDisabled/testReplicationSourceWALReaderDisabled.1704192935241 2024-01-02 10:55:35,296 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], 
DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:35,314 INFO [Listener at localhost/42899] regionserver.ReplicationSourceWALReader(119): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2024-01-02 10:55:35,319 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [30,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:35,428 DEBUG [Thread-279] wal.ProtobufLogReader(420): EOF at position 458 2024-01-02 10:55:35,428 DEBUG [Thread-279] regionserver.ReplicationSourceWALReader(162): Read 3 WAL entries eligible for replication 2024-01-02 10:55:35,428 DEBUG [Thread-279] wal.ProtobufLogReader(420): EOF at position 458 2024-01-02 10:55:35,435 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:35,436 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testReplicationSourceWALReaderDisabled:(num 1704192935241) 2024-01-02 10:55:35,440 INFO [Thread-279] wal.AbstractFSWALProvider(464): Log hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderDisabled/testReplicationSourceWALReaderDisabled.1704192935241 was moved to hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReaderDisabled.1704192935241 2024-01-02 10:55:35,446 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderDisabled Thread=159 (was 156) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:46232 [Waiting for operation #22] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33888 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-11-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Thread-279 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.handleEmptyWALEntryBatch(ReplicationSourceWALReader.java:251) org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:148) Potentially hanging thread: ForkJoinPool.commonPool-worker-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) - Thread LEAK? -, OpenFileDescriptor=498 (was 495) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4910 (was 4913) 2024-01-02 10:55:35,448 DEBUG [Thread-279] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReaderDisabled.1704192935241 2024-01-02 10:55:35,449 DEBUG [Thread-279] regionserver.ReplicationSourceWALReader(162): Read 0 WAL entries eligible for replication 2024-01-02 10:55:35,456 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testCleanClosedWALs Thread=159, OpenFileDescriptor=498, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4910 2024-01-02 10:55:35,458 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:35,462 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testCleanClosedWALs, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testCleanClosedWALs, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:35,478 DEBUG [AsyncFSWAL-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:35,478 DEBUG [AsyncFSWAL-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:35,479 DEBUG [AsyncFSWAL-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:35,481 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testCleanClosedWALs/testCleanClosedWALs.1704192935462 2024-01-02 10:55:35,483 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:35,487 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:35,509 DEBUG [AsyncFSWAL-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:35,510 DEBUG 
[AsyncFSWAL-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:35,510 DEBUG [AsyncFSWAL-12-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:35,513 INFO [Listener at localhost/42899] wal.AbstractFSWAL(802): Rolled WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testCleanClosedWALs/testCleanClosedWALs.1704192935462 with entries=1, filesize=208 B; new WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testCleanClosedWALs/testCleanClosedWALs.1704192935494 2024-01-02 10:55:35,516 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:35,516 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(716): hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testCleanClosedWALs/testCleanClosedWALs.1704192935462 is not closed yet, will try archiving it next time 2024-01-02 10:55:35,519 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:35,519 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 208 2024-01-02 10:55:35,524 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(425): Encountered a malformed edit, seeking back to last good position in file, from 216 to 208 java.io.EOFException: Invalid PB, EOF? 
Ignoring; originalPosition=208, currentPosition=216 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:354) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:95) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:83) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:259) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:181) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.peek(WALEntryStream.java:111) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.next(WALEntryStream.java:118) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase$WALEntryStreamWithRetries.access$001(WALEntryStreamTestBase.java:82) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase$WALEntryStreamWithRetries.lambda$next$0(WALEntryStreamTestBase.java:95) at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:183) at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:134) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase$WALEntryStreamWithRetries.next(WALEntryStreamTestBase.java:94) at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream.testCleanClosedWALs(TestBasicWALEntryStream.java:726) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: encoded_region_name, table_name, log_sequence_number, write_time at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:232) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:237) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.parseDelimitedFrom(ProtobufUtil.java:3578) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:348) ... 40 more 2024-01-02 10:55:35,526 WARN [Listener at localhost/42899] regionserver.WALEntryStream(222): Reached the end of WAL hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testCleanClosedWALs/testCleanClosedWALs.1704192935462. It was not closed cleanly, so we did not parse 8 bytes of data. 2024-01-02 10:55:35,526 DEBUG [Listener at localhost/42899] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testCleanClosedWALs/testCleanClosedWALs.1704192935462 2024-01-02 10:55:35,537 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testCleanClosedWALs/testCleanClosedWALs.1704192935494 not finished, retry = 0 2024-01-02 10:55:35,644 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:35,645 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testCleanClosedWALs:(num 1704192935494) 2024-01-02 10:55:35,661 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testCleanClosedWALs Thread=160 (was 159) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:46232 [Waiting for operation #23] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-12-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:33888 [Waiting for operation #14] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-1174331895_17 at /127.0.0.1:36812 [Waiting for operation #12] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) 
java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=501 (was 498) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=213 (was 213), ProcessCount=167 (was 167), AvailableMemoryMB=4908 (was 4910) 2024-01-02 10:55:35,675 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReadBeyondCommittedLength Thread=160, OpenFileDescriptor=501, MaxFileDescriptor=60000, SystemLoadAverage=213, ProcessCount=167, AvailableMemoryMB=4908 2024-01-02 10:55:35,677 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:35,682 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testReadBeyondCommittedLength, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReadBeyondCommittedLength, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:35,704 DEBUG [AsyncFSWAL-13-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:35,707 DEBUG [AsyncFSWAL-13-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:35,708 DEBUG [AsyncFSWAL-13-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:35,714 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReadBeyondCommittedLength/testReadBeyondCommittedLength.1704192935683 2024-01-02 10:55:35,716 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK]] 2024-01-02 10:55:35,729 DEBUG [Listener at localhost/42899] regionserver.WALEntryStream(272): The provider tells us the valid length for hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReadBeyondCommittedLength/testReadBeyondCommittedLength.1704192935683 is 318, but we have advanced to 319 2024-01-02 10:55:36,749 DEBUG [Listener at localhost/42899] regionserver.WALEntryStream(272): The provider tells us the valid length for 
hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReadBeyondCommittedLength/testReadBeyondCommittedLength.1704192935683 is 318, but we have advanced to 319 2024-01-02 10:55:36,762 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 319 2024-01-02 10:55:36,771 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:36,771 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testReadBeyondCommittedLength:(num 1704192935683) 2024-01-02 10:55:36,782 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReadBeyondCommittedLength Thread=161 (was 160) - Thread LEAK? -, OpenFileDescriptor=504 (was 501) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=236 (was 213) - SystemLoadAverage LEAK? -, ProcessCount=167 (was 167), AvailableMemoryMB=4877 (was 4908) 2024-01-02 10:55:36,792 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testResumeStreamingFromPosition Thread=161, OpenFileDescriptor=504, MaxFileDescriptor=60000, SystemLoadAverage=236, ProcessCount=167, AvailableMemoryMB=4877 2024-01-02 10:55:36,793 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:36,797 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testResumeStreamingFromPosition, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testResumeStreamingFromPosition, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:36,814 DEBUG [AsyncFSWAL-14-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:36,815 DEBUG [AsyncFSWAL-14-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:36,815 DEBUG [AsyncFSWAL-14-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:36,818 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testResumeStreamingFromPosition/testResumeStreamingFromPosition.1704192936797 2024-01-02 10:55:36,820 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with 
pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:36,839 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 437 2024-01-02 10:55:36,845 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:36,846 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testResumeStreamingFromPosition:(num 1704192936797) 2024-01-02 10:55:36,855 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testResumeStreamingFromPosition Thread=162 (was 161) - Thread LEAK? -, OpenFileDescriptor=507 (was 504) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=236 (was 236), ProcessCount=167 (was 167), AvailableMemoryMB=4876 (was 4877) 2024-01-02 10:55:36,865 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReader Thread=162, OpenFileDescriptor=507, MaxFileDescriptor=60000, SystemLoadAverage=236, ProcessCount=167, AvailableMemoryMB=4876 2024-01-02 10:55:36,866 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:36,869 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testReplicationSourceWALReader, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReader, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:36,884 DEBUG [AsyncFSWAL-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:36,885 DEBUG [AsyncFSWAL-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:36,886 DEBUG [AsyncFSWAL-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:36,888 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReader/testReplicationSourceWALReader.1704192936870 2024-01-02 10:55:36,889 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:36,898 INFO [Listener at localhost/42899] regionserver.ReplicationSourceWALReader(119): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2024-01-02 10:55:36,906 DEBUG [Thread-321] wal.ProtobufLogReader(420): EOF at position 458 2024-01-02 10:55:36,906 DEBUG [Thread-321] regionserver.ReplicationSourceWALReader(162): Read 3 WAL entries eligible for replication 2024-01-02 10:55:36,906 DEBUG [Thread-321] wal.ProtobufLogReader(420): EOF at position 458 2024-01-02 10:55:36,922 DEBUG [Thread-321] wal.ProtobufLogReader(420): EOF at position 578 2024-01-02 10:55:36,923 DEBUG [Thread-321] regionserver.ReplicationSourceWALReader(162): Read 1 WAL entries eligible for replication 2024-01-02 10:55:36,923 DEBUG [Thread-321] wal.ProtobufLogReader(420): EOF at position 578 2024-01-02 10:55:36,931 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:36,932 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testReplicationSourceWALReader:(num 1704192936870) 2024-01-02 10:55:36,935 INFO [Thread-321] wal.AbstractFSWALProvider(464): Log hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReader/testReplicationSourceWALReader.1704192936870 was moved to hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReader.1704192936870 2024-01-02 10:55:36,943 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReader Thread=164 (was 162) - Thread LEAK? -, OpenFileDescriptor=510 (was 507) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=236 (was 236), ProcessCount=167 (was 167), AvailableMemoryMB=4874 (was 4876) 2024-01-02 10:55:36,945 DEBUG [Thread-321] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReader.1704192936870 2024-01-02 10:55:36,945 DEBUG [Thread-321] regionserver.ReplicationSourceWALReader(162): Read 0 WAL entries eligible for replication 2024-01-02 10:55:36,955 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderWithFailingFilter Thread=164, OpenFileDescriptor=510, MaxFileDescriptor=60000, SystemLoadAverage=236, ProcessCount=167, AvailableMemoryMB=4873 2024-01-02 10:55:36,956 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:36,961 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testReplicationSourceWALReaderWithFailingFilter, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWithFailingFilter, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:36,977 DEBUG [AsyncFSWAL-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:36,978 DEBUG [AsyncFSWAL-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:36,978 DEBUG [AsyncFSWAL-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:36,981 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWithFailingFilter/testReplicationSourceWALReaderWithFailingFilter.1704192936962 2024-01-02 10:55:36,981 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:36,993 INFO [Listener at localhost/42899] regionserver.ReplicationSourceWALReader(119): peerClusterZnode=null, ReplicationSourceWALReaderThread : null inited, replicationBatchSizeCapacity=67108864, replicationBatchCountCapacity=25000, replicationBatchQueueCapacity=1 2024-01-02 10:55:37,004 WARN 
[Thread-331] regionserver.ReplicationSourceWALReader(177): Failed to read stream of replication entries org.apache.hadoop.hbase.replication.regionserver.WALEntryFilterRetryableException: failing filter at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream$FailingWALEntryFilter.filter(TestBasicWALEntryStream.java:558) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.filterEntry(ReplicationSourceWALReader.java:361) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:224) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:157) 2024-01-02 10:55:37,033 WARN [Thread-331] regionserver.ReplicationSourceWALReader(177): Failed to read stream of replication entries org.apache.hadoop.hbase.replication.regionserver.WALEntryFilterRetryableException: failing filter at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream$FailingWALEntryFilter.filter(TestBasicWALEntryStream.java:558) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.filterEntry(ReplicationSourceWALReader.java:361) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:224) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:157) 2024-01-02 10:55:37,075 WARN [Thread-331] regionserver.ReplicationSourceWALReader(177): Failed to read stream of replication entries org.apache.hadoop.hbase.replication.regionserver.WALEntryFilterRetryableException: failing filter at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream$FailingWALEntryFilter.filter(TestBasicWALEntryStream.java:558) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.filterEntry(ReplicationSourceWALReader.java:361) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:224) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:157) 2024-01-02 10:55:37,127 WARN [Thread-331] regionserver.ReplicationSourceWALReader(177): Failed to read stream of replication entries org.apache.hadoop.hbase.replication.regionserver.WALEntryFilterRetryableException: failing filter at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream$FailingWALEntryFilter.filter(TestBasicWALEntryStream.java:558) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.filterEntry(ReplicationSourceWALReader.java:361) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:224) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:157) 2024-01-02 10:55:37,184 WARN [Thread-331] regionserver.ReplicationSourceWALReader(177): Failed to read stream of replication entries org.apache.hadoop.hbase.replication.regionserver.WALEntryFilterRetryableException: failing filter at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream$FailingWALEntryFilter.filter(TestBasicWALEntryStream.java:558) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.filterEntry(ReplicationSourceWALReader.java:361) at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:224) at org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:157) 2024-01-02 10:55:37,252 DEBUG [Thread-331] wal.ProtobufLogReader(420): EOF at position 458 2024-01-02 10:55:37,252 DEBUG [Thread-331] regionserver.ReplicationSourceWALReader(162): Read 3 WAL entries eligible for replication 2024-01-02 10:55:37,252 DEBUG [Thread-331] wal.ProtobufLogReader(420): EOF at position 458 2024-01-02 10:55:37,258 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:37,259 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testReplicationSourceWALReaderWithFailingFilter:(num 1704192936962) 2024-01-02 10:55:37,264 INFO [Thread-331] wal.AbstractFSWALProvider(464): Log hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testReplicationSourceWALReaderWithFailingFilter/testReplicationSourceWALReaderWithFailingFilter.1704192936962 was moved to hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReaderWithFailingFilter.1704192936962 2024-01-02 10:55:37,270 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testReplicationSourceWALReaderWithFailingFilter Thread=166 (was 164) - Thread LEAK? -, OpenFileDescriptor=513 (was 510) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=236 (was 236), ProcessCount=167 (was 167), AvailableMemoryMB=4869 (was 4873) 2024-01-02 10:55:37,272 DEBUG [Thread-331] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs/testReplicationSourceWALReaderWithFailingFilter.1704192936962 2024-01-02 10:55:37,272 DEBUG [Thread-331] regionserver.ReplicationSourceWALReader(162): Read 0 WAL entries eligible for replication 2024-01-02 10:55:37,281 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testEmptyStream Thread=166, OpenFileDescriptor=513, MaxFileDescriptor=60000, SystemLoadAverage=236, ProcessCount=167, AvailableMemoryMB=4868 2024-01-02 10:55:37,282 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:37,286 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testEmptyStream, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEmptyStream, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:37,301 DEBUG [AsyncFSWAL-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:37,302 DEBUG [AsyncFSWAL-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:37,303 DEBUG [AsyncFSWAL-17-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:37,305 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testEmptyStream/testEmptyStream.1704192937287 2024-01-02 10:55:37,305 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:37,311 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 83 2024-01-02 10:55:37,318 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:37,318 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: 
AsyncFSWAL testEmptyStream:(num 1704192937287) 2024-01-02 10:55:37,329 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testEmptyStream Thread=167 (was 166) - Thread LEAK? -, OpenFileDescriptor=516 (was 513) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=236 (was 236), ProcessCount=167 (was 167), AvailableMemoryMB=4867 (was 4868) 2024-01-02 10:55:37,341 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testAppendsWithRolls Thread=167, OpenFileDescriptor=516, MaxFileDescriptor=60000, SystemLoadAverage=236, ProcessCount=167, AvailableMemoryMB=4867 2024-01-02 10:55:37,342 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:37,346 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testAppendsWithRolls, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testAppendsWithRolls, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:37,361 DEBUG [AsyncFSWAL-18-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:37,362 DEBUG [AsyncFSWAL-18-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:37,363 DEBUG [AsyncFSWAL-18-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:37,365 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testAppendsWithRolls/testAppendsWithRolls.1704192937346 2024-01-02 10:55:37,365 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK]] 2024-01-02 10:55:37,372 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 208 2024-01-02 10:55:37,373 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 208 2024-01-02 10:55:37,373 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 208 2024-01-02 10:55:37,375 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:37,396 DEBUG [AsyncFSWAL-18-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client 
skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:37,397 DEBUG [AsyncFSWAL-18-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:37,397 DEBUG [AsyncFSWAL-18-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:37,400 INFO [Listener at localhost/42899] wal.AbstractFSWAL(802): Rolled WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testAppendsWithRolls/testAppendsWithRolls.1704192937346 with entries=3, filesize=458 B; new WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testAppendsWithRolls/testAppendsWithRolls.1704192937382 2024-01-02 10:55:37,400 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:37,400 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(716): hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testAppendsWithRolls/testAppendsWithRolls.1704192937346 is not closed yet, will try archiving it next time 2024-01-02 10:55:37,402 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:37,406 INFO [Listener at localhost/42899] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2024-01-02 10:55:37,407 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(425): Encountered a malformed edit, seeking back to last good position in file, from 466 to 458 java.io.EOFException: Invalid PB, EOF? 
Ignoring; originalPosition=458, currentPosition=466 at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:354) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:95) at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:83) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:259) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:173) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.peek(WALEntryStream.java:111) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.next(WALEntryStream.java:118) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase$WALEntryStreamWithRetries.access$001(WALEntryStreamTestBase.java:82) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase$WALEntryStreamWithRetries.lambda$next$0(WALEntryStreamTestBase.java:95) at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:183) at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:134) at org.apache.hadoop.hbase.replication.regionserver.WALEntryStreamTestBase$WALEntryStreamWithRetries.next(WALEntryStreamTestBase.java:94) at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream.testAppendsWithRolls(TestBasicWALEntryStream.java:126) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: encoded_region_name, table_name, log_sequence_number, write_time at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:232) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:237) at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.parseDelimitedFrom(ProtobufUtil.java:3578) at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:348) ... 40 more 2024-01-02 10:55:37,411 DEBUG [Listener at localhost/42899] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testAppendsWithRolls/testAppendsWithRolls.1704192937346 2024-01-02 10:55:37,416 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 208 2024-01-02 10:55:37,424 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:37,424 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testAppendsWithRolls:(num 1704192937382) 2024-01-02 10:55:37,435 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testAppendsWithRolls Thread=168 (was 167) - Thread LEAK? -, OpenFileDescriptor=519 (was 516) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=236 (was 236), ProcessCount=167 (was 167), AvailableMemoryMB=4867 (was 4867) 2024-01-02 10:55:37,445 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testPosition Thread=168, OpenFileDescriptor=519, MaxFileDescriptor=60000, SystemLoadAverage=236, ProcessCount=167, AvailableMemoryMB=4866 2024-01-02 10:55:37,446 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2024-01-02 10:55:37,449 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testPosition, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testPosition, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32 2024-01-02 10:55:37,465 DEBUG [AsyncFSWAL-19-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK] 2024-01-02 10:55:37,466 DEBUG [AsyncFSWAL-19-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK] 2024-01-02 10:55:37,467 DEBUG [AsyncFSWAL-19-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK] 2024-01-02 10:55:37,469 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testPosition/testPosition.1704192937449 2024-01-02 10:55:37,469 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]] 2024-01-02 10:55:37,484 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 458 2024-01-02 10:55:37,491 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testPosition/testPosition.1704192937449 not finished, retry = 0 2024-01-02 10:55:37,528 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2024-01-02 10:55:37,596 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs 2024-01-02 10:55:37,596 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): 
Closed WAL: AsyncFSWAL testPosition:(num 1704192937449)
2024-01-02 10:55:37,607 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testPosition Thread=170 (was 168) - Thread LEAK? -, OpenFileDescriptor=522 (was 519) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=236 (was 236), ProcessCount=167 (was 167), AvailableMemoryMB=4865 (was 4866)
2024-01-02 10:55:37,617 INFO [Listener at localhost/42899] hbase.ResourceChecker(147): before: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testSizeOfLogQueue Thread=170, OpenFileDescriptor=522, MaxFileDescriptor=60000, SystemLoadAverage=236, ProcessCount=167, AvailableMemoryMB=4865
2024-01-02 10:55:37,617 INFO [Listener at localhost/42899] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
2024-01-02 10:55:37,622 INFO [Listener at localhost/42899] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=testSizeOfLogQueue, suffix=, logDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testSizeOfLogQueue, archiveDir=hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs, maxLogs=32
2024-01-02 10:55:37,639 DEBUG [AsyncFSWAL-20-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK]
2024-01-02 10:55:37,640 DEBUG [AsyncFSWAL-20-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK]
2024-01-02 10:55:37,640 DEBUG [AsyncFSWAL-20-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]
2024-01-02 10:55:37,642 INFO [Listener at localhost/42899] wal.AbstractFSWAL(806): New WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testSizeOfLogQueue/testSizeOfLogQueue.1704192937623
2024-01-02 10:55:37,643 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]]
2024-01-02 10:55:37,661 DEBUG [AsyncFSWAL-20-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK]
2024-01-02 10:55:37,662 DEBUG [AsyncFSWAL-20-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK]
2024-01-02 10:55:37,662 DEBUG [AsyncFSWAL-20-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = /127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK]
2024-01-02 10:55:37,683 INFO [Listener at localhost/42899] wal.AbstractFSWAL(802): Rolled WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testSizeOfLogQueue/testSizeOfLogQueue.1704192937623 with entries=1, filesize=208 B; new WAL /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testSizeOfLogQueue/testSizeOfLogQueue.1704192937644
2024-01-02 10:55:37,684 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43327,DS-50584779-8bc8-44b8-b8b3-73eb5f90a869,DISK], DatanodeInfoWithStorage[127.0.0.1:36671,DS-1ec67827-86f0-4d42-b3b5-887ba7f05758,DISK], DatanodeInfoWithStorage[127.0.0.1:44277,DS-db37aa7a-664e-44f4-a8c1-32db221a0e3d,DISK]]
2024-01-02 10:55:37,684 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(716): hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testSizeOfLogQueue/testSizeOfLogQueue.1704192937623 is not closed yet, will try archiving it next time
2024-01-02 10:55:37,700 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(425): Encountered a malformed edit, seeking back to last good position in file, from 216 to 208
java.io.EOFException: Invalid PB, EOF? Ignoring; originalPosition=208, currentPosition=216
    at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:354)
    at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:95)
    at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:83)
    at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:259)
    at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:173)
    at org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:102)
    at org.apache.hadoop.hbase.replication.regionserver.TestBasicWALEntryStream.testSizeOfLogQueue(TestBasicWALEntryStream.java:707)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hbase.thirdparty.com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: encoded_region_name, table_name, log_sequence_number, write_time
    at org.apache.hbase.thirdparty.com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:79)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:68)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:232)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:237)
    at org.apache.hbase.thirdparty.com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:48)
    at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.parseDelimitedFrom(ProtobufUtil.java:3578)
    at org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:348)
    ... 33 more
2024-01-02 10:55:37,704 DEBUG [Listener at localhost/42899] regionserver.WALEntryStream(248): EOF, closing hdfs://localhost:43439/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/WALs/testSizeOfLogQueue/testSizeOfLogQueue.1704192937623
2024-01-02 10:55:37,711 DEBUG [Listener at localhost/42899] wal.ProtobufLogReader(420): EOF at position 83
2024-01-02 10:55:37,720 DEBUG [Listener at localhost/42899] wal.AbstractFSWAL(1028): Moved 2 WAL file(s) to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/oldWALs
2024-01-02 10:55:37,721 INFO [Listener at localhost/42899] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL testSizeOfLogQueue:(num 1704192937644)
2024-01-02 10:55:37,731 INFO [Listener at localhost/42899] hbase.ResourceChecker(175): after: replication.regionserver.TestBasicWALEntryStreamAsyncFSWAL#testSizeOfLogQueue Thread=171 (was 170) - Thread LEAK? -, OpenFileDescriptor=525 (was 522) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=236 (was 236), ProcessCount=167 (was 167), AvailableMemoryMB=4864 (was 4865)
2024-01-02 10:55:37,731 INFO [Listener at localhost/42899] hbase.HBaseTestingUtility(1286): Shutting down minicluster
2024-01-02 10:55:37,732 WARN [Listener at localhost/42899] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2024-01-02 10:55:37,737 INFO [Listener at localhost/42899] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2024-01-02 10:55:37,840 WARN [BP-4569960-172.31.14.131-1704192928642 heartbeating to localhost/127.0.0.1:43439] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2024-01-02 10:55:37,840 WARN [BP-4569960-172.31.14.131-1704192928642 heartbeating to localhost/127.0.0.1:43439] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-4569960-172.31.14.131-1704192928642 (Datanode Uuid b410da91-94f8-458a-853a-df29a3aede77) service to localhost/127.0.0.1:43439
2024-01-02 10:55:37,843 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/cluster_5195cdae-b675-1741-5385-06d6a585e5bd/dfs/data/data5/current/BP-4569960-172.31.14.131-1704192928642] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2024-01-02 10:55:37,843 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/cluster_5195cdae-b675-1741-5385-06d6a585e5bd/dfs/data/data6/current/BP-4569960-172.31.14.131-1704192928642] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2024-01-02 10:55:37,846 WARN [Listener at localhost/42899] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2024-01-02 10:55:37,850 INFO [Listener at localhost/42899] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2024-01-02 10:55:37,951 WARN [BP-4569960-172.31.14.131-1704192928642 heartbeating to localhost/127.0.0.1:43439] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2024-01-02 10:55:37,952 WARN [BP-4569960-172.31.14.131-1704192928642 heartbeating to localhost/127.0.0.1:43439] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-4569960-172.31.14.131-1704192928642 (Datanode Uuid ab0f122c-aa18-46bb-8258-7abccd88bb31) service to localhost/127.0.0.1:43439
2024-01-02 10:55:37,952 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/cluster_5195cdae-b675-1741-5385-06d6a585e5bd/dfs/data/data3/current/BP-4569960-172.31.14.131-1704192928642] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2024-01-02 10:55:37,953 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/cluster_5195cdae-b675-1741-5385-06d6a585e5bd/dfs/data/data4/current/BP-4569960-172.31.14.131-1704192928642] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2024-01-02 10:55:37,954 WARN [Listener at localhost/42899] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called
2024-01-02 10:55:37,956 INFO [Listener at localhost/42899] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2024-01-02 10:55:38,057 WARN [BP-4569960-172.31.14.131-1704192928642 heartbeating to localhost/127.0.0.1:43439] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted
2024-01-02 10:55:38,058 WARN [BP-4569960-172.31.14.131-1704192928642 heartbeating to localhost/127.0.0.1:43439] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-4569960-172.31.14.131-1704192928642 (Datanode Uuid 9f52681b-e55e-48e4-a1d5-28659187cd89) service to localhost/127.0.0.1:43439
2024-01-02 10:55:38,069 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/cluster_5195cdae-b675-1741-5385-06d6a585e5bd/dfs/data/data1/current/BP-4569960-172.31.14.131-1704192928642] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2024-01-02 10:55:38,069 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/test-data/b5d27ec1-32de-cbef-60a5-98da10ab9c10/cluster_5195cdae-b675-1741-5385-06d6a585e5bd/dfs/data/data2/current/BP-4569960-172.31.14.131-1704192928642] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted
2024-01-02 10:55:38,098 INFO [Listener at localhost/42899] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0
2024-01-02 10:55:38,257 INFO [Listener at localhost/42899] hbase.HBaseTestingUtility(1293): Minicluster is down