public abstract class AbstractFileStoreWrite<T> extends Object implements FileStoreWrite<T>

Type Parameters:
T - type of record to write.

Base FileStoreWrite implementation.

Nested Class Summary

Modifier and Type | Class and Description |
---|---|
static class | AbstractFileStoreWrite.WriterContainer<T> - RecordWriter with the snapshot id it is created upon and the identifier of its last modified commit. |
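The WriterContainer pairing described above can be pictured as a small value class: a writer bundled with the snapshot it was created from and the identifier of its last modified commit. The sketch below is illustrative only; the field names and types are assumptions, not the actual implementation.

```java
public class WriterContainerSketch {
    // Illustrative stand-in for AbstractFileStoreWrite.WriterContainer<T>.
    // Field names here are assumptions for the sketch, not the real class.
    static class Container<W> {
        final W writer;
        final Long baseSnapshotId;          // snapshot the writer was created upon; may be null
        long lastModifiedCommitIdentifier;  // identifier of the writer's last modified commit

        Container(W writer, Long baseSnapshotId) {
            this.writer = writer;
            this.baseSnapshotId = baseSnapshotId;
            this.lastModifiedCommitIdentifier = Long.MIN_VALUE; // no commit yet
        }
    }

    public static void main(String[] args) {
        Container<String> c = new Container<>("writer-0", 5L);
        c.lastModifiedCommitIdentifier = 1L;
        System.out.println(c.baseSnapshotId + " " + c.lastModifiedCommitIdentifier);
    }
}
```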
Field Summary

Modifier and Type | Field and Description |
---|---|
protected org.apache.flink.runtime.io.disk.iomanager.IOManager | ioManager |
protected SnapshotManager | snapshotManager |
protected Map<org.apache.flink.table.data.binary.BinaryRowData,Map<Integer,AbstractFileStoreWrite.WriterContainer<T>>> | writers |
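The writers field keeps one writer per (partition, bucket) pair via a two-level map. A minimal self-contained sketch of that structure, using String as a stand-in for both BinaryRowData and the writer container (this is not the real class, only an illustration of the map shape):

```java
import java.util.HashMap;
import java.util.Map;

public class WriterMapSketch {
    // partition -> (bucket -> writer); BinaryRowData and WriterContainer
    // are replaced by String stand-ins for illustration only.
    private final Map<String, Map<Integer, String>> writers = new HashMap<>();

    // Look up the writer for a (partition, bucket) pair, creating it on demand,
    // mirroring how a per-bucket writer cache is typically maintained.
    public String getWriter(String partition, int bucket) {
        return writers
                .computeIfAbsent(partition, p -> new HashMap<>())
                .computeIfAbsent(bucket, b -> "writer[" + partition + "/" + bucket + "]");
    }

    public int partitionCount() {
        return writers.size();
    }

    public static void main(String[] args) {
        WriterMapSketch sketch = new WriterMapSketch();
        System.out.println(sketch.getWriter("dt=2023-01-01", 0));
        System.out.println(sketch.getWriter("dt=2023-01-01", 1));
        System.out.println(sketch.partitionCount());
    }
}
```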
Constructor Summary

Modifier | Constructor and Description |
---|---|
protected | AbstractFileStoreWrite(String commitUser, SnapshotManager snapshotManager, FileStoreScan scan) |
Method Summary

Modifier and Type | Method and Description |
---|---|
void | close() - Close the writer. |
void | compact(org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, boolean fullCompaction) - Compact data stored in the given partition and bucket. |
abstract AbstractFileStoreWrite.WriterContainer<T> | createEmptyWriterContainer(org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, ExecutorService compactExecutor) - Create an empty RecordWriter for the given partition and bucket. |
abstract AbstractFileStoreWrite.WriterContainer<T> | createWriterContainer(org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, ExecutorService compactExecutor) - Create a RecordWriter for the given partition and bucket. |
void | notifyNewFiles(long snapshotId, org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, List<DataFileMeta> files) - Notify that some new files are created at the given snapshot in the given bucket. |
protected void | notifyNewWriter(RecordWriter<T> writer) |
List<FileCommittable> | prepareCommit(boolean blocking, long commitIdentifier) - Prepare a commit in the write. |
protected List<DataFileMeta> | scanExistingFileMetas(Long snapshotId, org.apache.flink.table.data.binary.BinaryRowData partition, int bucket) |
FileStoreWrite<T> | withIOManager(org.apache.flink.runtime.io.disk.iomanager.IOManager ioManager) |
void | withOverwrite(boolean overwrite) - If overwrite is true, the writer will overwrite the store; otherwise it will not. |
void | write(org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, T data) - Write the data to the store according to the partition and bucket. |
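A typical call sequence against a FileStoreWrite is: write per record, prepareCommit once per checkpoint, and close at the end. The sketch below models that lifecycle with simplified stand-in types; none of the actual Flink Table Store classes are used, and the class and method bodies here are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class WriteLifecycleSketch {
    // Simplified stand-in for FileStoreWrite<T>: buffers records and hands
    // them out as "committables" when a commit is prepared.
    static class SimpleWrite<T> {
        private final List<T> pending = new ArrayList<>();

        void write(String partition, int bucket, T data) {
            pending.add(data); // the real writer routes by (partition, bucket)
        }

        List<T> prepareCommit(boolean blocking, long commitIdentifier) {
            // A real implementation may wait for compaction when blocking is true.
            List<T> committables = new ArrayList<>(pending);
            pending.clear();
            return committables;
        }

        void close() {
            pending.clear(); // release any buffered state
        }
    }

    public static void main(String[] args) {
        SimpleWrite<String> write = new SimpleWrite<>();
        write.write("dt=2023-01-01", 0, "row-1");
        write.write("dt=2023-01-01", 1, "row-2");
        List<String> committables = write.prepareCommit(false, 1L);
        System.out.println(committables.size());
        write.close();
    }
}
```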
Field Detail

protected final SnapshotManager snapshotManager

@Nullable
protected org.apache.flink.runtime.io.disk.iomanager.IOManager ioManager

protected final Map<org.apache.flink.table.data.binary.BinaryRowData,Map<Integer,AbstractFileStoreWrite.WriterContainer<T>>> writers

Constructor Detail

protected AbstractFileStoreWrite(String commitUser, SnapshotManager snapshotManager, FileStoreScan scan)
Method Detail

public FileStoreWrite<T> withIOManager(org.apache.flink.runtime.io.disk.iomanager.IOManager ioManager)
Specified by: withIOManager in interface FileStoreWrite<T>

protected List<DataFileMeta> scanExistingFileMetas(Long snapshotId, org.apache.flink.table.data.binary.BinaryRowData partition, int bucket)

public void withOverwrite(boolean overwrite)
Description copied from interface: FileStoreWrite
If overwrite is true, the writer will overwrite the store; otherwise it will not.
Specified by: withOverwrite in interface FileStoreWrite<T>
Parameters:
overwrite - the overwrite flag

public void write(org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, T data) throws Exception
Description copied from interface: FileStoreWrite
Write the data to the store according to the partition and bucket.
Specified by: write in interface FileStoreWrite<T>
Parameters:
partition - the partition of the data
bucket - the bucket id of the data
data - the given data
Throws:
Exception - the thrown exception when writing the record

public void compact(org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, boolean fullCompaction) throws Exception
Description copied from interface: FileStoreWrite
Compact data stored in the given partition and bucket.
Specified by: compact in interface FileStoreWrite<T>
Parameters:
partition - the partition to compact
bucket - the bucket to compact
fullCompaction - whether to trigger full compaction or just normal compaction
Throws:
Exception - the thrown exception when compacting the records

public void notifyNewFiles(long snapshotId, org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, List<DataFileMeta> files)
Description copied from interface: FileStoreWrite
Notify that some new files are created at the given snapshot in the given bucket. Most probably, these files are created by another job. Currently this method is only used by the dedicated compact job to see the files created by writer jobs.
Specified by: notifyNewFiles in interface FileStoreWrite<T>
Parameters:
snapshotId - the snapshot id where new files are created
partition - the partition where new files are created
bucket - the bucket where new files are created
files - the new files themselves

public List<FileCommittable> prepareCommit(boolean blocking, long commitIdentifier) throws Exception
Description copied from interface: FileStoreWrite
Prepare a commit in the write.
Specified by: prepareCommit in interface FileStoreWrite<T>
Parameters:
blocking - whether this method needs to wait for the current compaction to complete
commitIdentifier - identifier of the commit being prepared
Throws:
Exception - the thrown exception

public void close() throws Exception
Description copied from interface: FileStoreWrite
Close the writer.
Specified by: close in interface FileStoreWrite<T>
Throws:
Exception - the thrown exception

protected void notifyNewWriter(RecordWriter<T> writer)

@VisibleForTesting
public abstract AbstractFileStoreWrite.WriterContainer<T> createWriterContainer(org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, ExecutorService compactExecutor)
Create a RecordWriter for the given partition and bucket.

@VisibleForTesting
public abstract AbstractFileStoreWrite.WriterContainer<T> createEmptyWriterContainer(org.apache.flink.table.data.binary.BinaryRowData partition, int bucket, ExecutorService compactExecutor)
Create an empty RecordWriter for the given partition and bucket.

Copyright © 2019–2023 The Apache Software Foundation. All rights reserved.