Constructor and Description |
---|
DistributedCache(Map<String,Future<Path>> cacheCopyTasks) |
Constructor and Description |
---|
AbstractRuntimeUDFContext(TaskInfo taskInfo,
UserCodeClassLoader userCodeClassLoader,
ExecutionConfig executionConfig,
Map<String,Accumulator<?,?>> accumulators,
Map<String,Future<Path>> cpTasks,
OperatorMetricGroup metrics) |
RuntimeUDFContext(TaskInfo taskInfo,
ClassLoader userCodeClassLoader,
ExecutionConfig executionConfig,
Map<String,Future<Path>> cpTasks,
Map<String,Accumulator<?,?>> accumulators,
OperatorMetricGroup metrics) |
RuntimeUDFContext(TaskInfo taskInfo,
ClassLoader userCodeClassLoader,
ExecutionConfig executionConfig,
Map<String,Future<Path>> cpTasks,
Map<String,Accumulator<?,?>> accumulators,
OperatorMetricGroup metrics,
JobID jobID) |
Modifier and Type | Field and Description |
---|---|
protected Path |
FileInputFormat.filePath
Deprecated.
Please override
FileInputFormat.supportsMultiPaths() and use FileInputFormat.getFilePaths() and FileInputFormat.setFilePaths(Path...). |
protected Path |
FileOutputFormat.outputFilePath
The path of the file to be written.
|
Modifier and Type | Method and Description |
---|---|
Path |
FileInputFormat.getFilePath()
Deprecated.
Please use getFilePaths() instead.
|
Path[] |
FileInputFormat.getFilePaths()
Returns the paths of all files to be read by the FileInputFormat.
|
Path |
FileOutputFormat.getOutputFilePath() |
Modifier and Type | Method and Description |
---|---|
abstract boolean |
FilePathFilter.filterPath(Path filePath)
Returns
true if the given filePath is to be ignored when processing a
directory. |
boolean |
FilePathFilter.DefaultFilter.filterPath(Path filePath) |
boolean |
GlobFilePathFilter.filterPath(Path filePath) |
protected FileInputFormat.FileBaseStatistics |
FileInputFormat.getFileStats(FileInputFormat.FileBaseStatistics cachedStats,
Path[] filePaths,
ArrayList<FileStatus> files) |
protected FileInputFormat.FileBaseStatistics |
FileInputFormat.getFileStats(FileInputFormat.FileBaseStatistics cachedStats,
Path filePath,
FileSystem fs,
ArrayList<FileStatus> files) |
void |
FileInputFormat.setFilePath(Path filePath)
Sets a single path of a file to be read.
|
void |
FileInputFormat.setFilePaths(Path... filePaths)
Sets multiple paths of files to be read.
|
void |
FileOutputFormat.setOutputFilePath(Path path) |
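The setters above all take org.apache.flink.core.fs.Path values. A minimal sketch of pointing a text input format at several files and a text output format at a target directory; the file:// locations are hypothetical:

```java
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.api.java.io.TextOutputFormat;
import org.apache.flink.core.fs.Path;

public class PathFormatSketch {
    public static void main(String[] args) {
        // Single input path via the constructor...
        TextInputFormat input = new TextInputFormat(new Path("file:///data/logs/part-0"));
        // ...or several paths at once; this replaces any previously set path(s).
        input.setFilePaths(
                new Path("file:///data/logs/part-0"),
                new Path("file:///data/logs/part-1"));

        // Output format writing to a target directory.
        TextOutputFormat<String> output = new TextOutputFormat<>(new Path("file:///data/out"));
        output.setOutputFilePath(new Path("file:///data/out"));
    }
}
```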
Constructor and Description |
---|
DelimitedInputFormat(Path filePath,
Configuration configuration) |
FileInputFormat(Path filePath) |
FileOutputFormat(Path outputPath) |
GenericCsvInputFormat(Path filePath) |
Modifier and Type | Method and Description |
---|---|
Path |
CsvReader.getFilePath() |
Constructor and Description |
---|
CsvInputFormat(Path filePath) |
CsvOutputFormat(Path outputPath)
Creates an instance of CsvOutputFormat.
|
CsvOutputFormat(Path outputPath,
String fieldDelimiter)
Creates an instance of CsvOutputFormat.
|
CsvOutputFormat(Path outputPath,
String recordDelimiter,
String fieldDelimiter)
Creates an instance of CsvOutputFormat.
|
CsvReader(Path filePath,
ExecutionEnvironment executionContext) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
boolean[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
int[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames,
boolean[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames,
int[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
boolean[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
int[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames,
boolean[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames,
int[] includedFieldsMask) |
PrimitiveInputFormat(Path filePath,
Class<OT> primitiveClass) |
PrimitiveInputFormat(Path filePath,
String delimiter,
Class<OT> primitiveClass) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes,
boolean emptyColumnAsNull) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes,
int[] selectedFields) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes,
String lineDelimiter,
String fieldDelimiter) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes,
String lineDelimiter,
String fieldDelimiter,
int[] selectedFields) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypeInfos,
String lineDelimiter,
String fieldDelimiter,
int[] selectedFields,
boolean emptyColumnAsNull) |
TextInputFormat(Path filePath) |
TextOutputFormat(Path outputPath) |
TextOutputFormat(Path outputPath,
String charset) |
TextValueInputFormat(Path filePath) |
TupleCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
TupleTypeInfoBase<OUT> tupleTypeInfo) |
TupleCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
TupleTypeInfoBase<OUT> tupleTypeInfo,
boolean[] includedFieldsMask) |
TupleCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
TupleTypeInfoBase<OUT> tupleTypeInfo,
int[] includedFieldsMask) |
TupleCsvInputFormat(Path filePath,
TupleTypeInfoBase<OUT> tupleTypeInfo) |
TupleCsvInputFormat(Path filePath,
TupleTypeInfoBase<OUT> tupleTypeInfo,
boolean[] includedFieldsMask) |
TupleCsvInputFormat(Path filePath,
TupleTypeInfoBase<OUT> tupleTypeInfo,
int[] includedFieldsMask) |
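All of these CSV and text formats are constructed from a Path plus delimiters or type information. A small sketch using the RowCsvInputFormat(Path, TypeInformation[], String, String) constructor listed above; the file location and column layout are assumptions:

```java
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.io.RowCsvInputFormat;
import org.apache.flink.core.fs.Path;

public class RowCsvSketch {
    public static void main(String[] args) {
        // Hypothetical CSV layout: id, name, score.
        TypeInformation[] fieldTypes = new TypeInformation[] {
            BasicTypeInfo.INT_TYPE_INFO,
            BasicTypeInfo.STRING_TYPE_INFO,
            BasicTypeInfo.DOUBLE_TYPE_INFO
        };

        // Line delimiter "\n", field delimiter ",".
        RowCsvInputFormat format = new RowCsvInputFormat(
                new Path("file:///data/scores.csv"), fieldTypes, "\n", ",");
    }
}
```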
Constructor and Description |
---|
ScalaCsvOutputFormat(Path outputPath)
Creates an instance of CsvOutputFormat.
|
ScalaCsvOutputFormat(Path outputPath,
String fieldDelimiter)
Creates an instance of CsvOutputFormat.
|
ScalaCsvOutputFormat(Path outputPath,
String recordDelimiter,
String fieldDelimiter)
Creates an instance of CsvOutputFormat.
|
Modifier and Type | Method and Description |
---|---|
Path |
StateChangeFsUploader.getBasePath() |
Constructor and Description |
---|
DuplicatingStateChangeFsUploader(JobID jobID,
Path basePath,
FileSystem fileSystem,
boolean compression,
int bufferSize,
ChangelogStorageMetricGroup metrics,
TaskChangelogRegistry changelogRegistry,
LocalRecoveryDirectoryProvider localRecoveryDirectoryProvider) |
FsStateChangelogStorage(JobID jobID,
Path basePath,
boolean compression,
int bufferSize,
ChangelogStorageMetricGroup metricGroup,
TaskChangelogRegistry changelogRegistry,
LocalRecoveryConfig localRecoveryConfig) |
StateChangeFsUploader(JobID jobID,
Path basePath,
FileSystem fileSystem,
boolean compression,
int bufferSize,
ChangelogStorageMetricGroup metrics,
TaskChangelogRegistry changelogRegistry) |
StateChangeFsUploader(JobID jobID,
Path basePath,
FileSystem fileSystem,
boolean compression,
int bufferSize,
ChangelogStorageMetricGroup metrics,
TaskChangelogRegistry changelogRegistry,
java.util.function.BiFunction<Path,Long,StreamStateHandle> handleFactory) |
Constructor and Description |
---|
AbstractStateChangeFsUploader(boolean compression,
int bufferSize,
ChangelogStorageMetricGroup metrics,
TaskChangelogRegistry changelogRegistry,
java.util.function.BiFunction<Path,Long,StreamStateHandle> handleFactory) |
StateChangeFsUploader(JobID jobID,
Path basePath,
FileSystem fileSystem,
boolean compression,
int bufferSize,
ChangelogStorageMetricGroup metrics,
TaskChangelogRegistry changelogRegistry,
java.util.function.BiFunction<Path,Long,StreamStateHandle> handleFactory) |
Modifier and Type | Method and Description |
---|---|
Path |
FileSinkCommittable.getCompactedFileToCleanup() |
Modifier and Type | Method and Description |
---|---|
static <IN> FileSink.DefaultBulkFormatBuilder<IN> |
FileSink.forBulkFormat(Path basePath,
BulkWriter.Factory<IN> bulkWriterFactory) |
static <IN> FileSink.DefaultRowFormatBuilder<IN> |
FileSink.forRowFormat(Path basePath,
Encoder<IN> encoder) |
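FileSink.forRowFormat(Path, Encoder) builds a row-encoded sink rooted at the given base path. A minimal sketch, assuming a String stream and a hypothetical local output directory:

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FileSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000); // the sink finalizes part files on checkpoints

        FileSink<String> sink =
            FileSink.forRowFormat(
                    new Path("file:///tmp/file-sink-out"),      // hypothetical base path
                    new SimpleStringEncoder<String>("UTF-8"))
                .build();

        env.fromElements("a", "b", "c").sinkTo(sink);
        env.execute("file-sink-sketch");
    }
}
```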
Constructor and Description |
---|
BulkFormatBuilder(Path basePath,
BulkWriter.Factory<IN> writerFactory,
BucketAssigner<IN,String> assigner) |
BulkFormatBuilder(Path basePath,
long bucketCheckInterval,
BulkWriter.Factory<IN> writerFactory,
BucketAssigner<IN,String> assigner,
CheckpointRollingPolicy<IN,String> policy,
FileWriterBucketFactory<IN> bucketFactory,
OutputFileConfig outputFileConfig) |
FileSinkCommittable(String bucketId,
Path compactedFileToCleanup) |
RowFormatBuilder(Path basePath,
Encoder<IN> encoder,
BucketAssigner<IN,String> bucketAssigner) |
RowFormatBuilder(Path basePath,
long bucketCheckInterval,
Encoder<IN> encoder,
BucketAssigner<IN,String> assigner,
RollingPolicy<IN,String> policy,
FileWriterBucketFactory<IN> bucketFactory,
OutputFileConfig outputFileConfig) |
Modifier and Type | Method and Description |
---|---|
InputFormatBasedReader<T> |
InputFormatBasedReader.Factory.createFor(Path path) |
RecordWiseFileCompactor.Reader<T> |
RecordWiseFileCompactor.Reader.Factory.createFor(Path path) |
DecoderBasedReader<T> |
DecoderBasedReader.Factory.createFor(Path path) |
Modifier and Type | Method and Description |
---|---|
void |
IdenticalFileCompactor.compact(List<Path> inputFiles,
OutputStream outputStream) |
void |
OutputStreamBasedFileCompactor.compact(List<Path> inputFiles,
OutputStream outputStream) |
void |
RecordWiseFileCompactor.compact(List<Path> inputFiles,
RecordWiseFileCompactor.Writer<IN> writer) |
protected abstract void |
OutputStreamBasedFileCompactor.doCompact(List<Path> inputFiles,
OutputStream outputStream) |
protected void |
ConcatFileCompactor.doCompact(List<Path> inputFiles,
OutputStream outputStream) |
Constructor and Description |
---|
DecoderBasedReader(Path path,
DecoderBasedReader.Decoder<T> decoder) |
InputFormatBasedReader(Path path,
FileInputFormat<T> inputFormat) |
Modifier and Type | Method and Description |
---|---|
Path |
FileWriterBucketState.getBucketPath() |
Modifier and Type | Method and Description |
---|---|
org.apache.flink.connector.file.sink.writer.FileWriterBucket<IN> |
DefaultFileWriterBucketFactory.getNewBucket(String bucketId,
Path bucketPath,
BucketWriter<IN,String> bucketWriter,
RollingPolicy<IN,String> rollingPolicy,
OutputFileConfig outputFileConfig) |
org.apache.flink.connector.file.sink.writer.FileWriterBucket<IN> |
FileWriterBucketFactory.getNewBucket(String bucketId,
Path bucketPath,
BucketWriter<IN,String> bucketWriter,
RollingPolicy<IN,String> rollingPolicy,
OutputFileConfig outputFileConfig) |
Constructor and Description |
---|
FileWriter(Path basePath,
SinkWriterMetricGroup metricGroup,
BucketAssigner<IN,String> bucketAssigner,
FileWriterBucketFactory<IN> bucketFactory,
BucketWriter<IN,String> bucketWriter,
RollingPolicy<IN,String> rollingPolicy,
OutputFileConfig outputFileConfig,
ProcessingTimeService processingTimeService,
long bucketCheckInterval)
A constructor creating a new empty bucket manager.
|
FileWriterBucketState(String bucketId,
Path bucketPath,
long inProgressFileCreationTime,
InProgressFileWriter.InProgressFileRecoverable inProgressFileRecoverable) |
FileWriterBucketState(String bucketId,
Path bucketPath,
long inProgressFileCreationTime,
InProgressFileWriter.InProgressFileRecoverable inProgressFileRecoverable,
Map<Long,List<InProgressFileWriter.PendingFileRecoverable>> pendingFileRecoverablesPerCheckpoint) |
Modifier and Type | Field and Description |
---|---|
protected Path[] |
AbstractFileSource.AbstractFileSourceBuilder.inputPaths |
Modifier and Type | Method and Description |
---|---|
Path |
FileSourceSplit.path()
Gets the file's path.
|
Modifier and Type | Method and Description |
---|---|
Collection<Path> |
PendingSplitsCheckpoint.getAlreadyProcessedPaths() |
Modifier and Type | Method and Description |
---|---|
static <T> FileSource.FileSourceBuilder<T> |
FileSource.forBulkFileFormat(BulkFormat<T,FileSourceSplit> bulkFormat,
Path... paths)
Builds a new
FileSource using a BulkFormat to read batches of records from
files. |
static <T> FileSource.FileSourceBuilder<T> |
FileSource.forRecordFileFormat(FileRecordFormat<T> recordFormat,
Path... paths)
Deprecated.
Please use
FileSource.forRecordStreamFormat(StreamFormat, Path...) instead. |
static <T> FileSource.FileSourceBuilder<T> |
FileSource.forRecordStreamFormat(StreamFormat<T> streamFormat,
Path... paths)
Builds a new
FileSource using a StreamFormat to read record-by-record from a
file stream. |
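The builders above take one or more Path values as the roots to read. A minimal sketch of forRecordStreamFormat, assuming the TextLineInputFormat stream format and a hypothetical HDFS directory; monitorContinuously is optional and switches the source to continuous file discovery:

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FileSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        FileSource<String> source =
            FileSource.forRecordStreamFormat(
                    new TextLineInputFormat(),
                    new Path("hdfs:///data/in"))                  // one or more paths
                .monitorContinuously(Duration.ofSeconds(10))      // optional: keep watching for new files
                .build();

        DataStream<String> lines =
            env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source");
        lines.print();
        env.execute("file-source-sketch");
    }
}
```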
Modifier and Type | Method and Description |
---|---|
static <T extends FileSourceSplit> |
PendingSplitsCheckpoint.fromCollectionSnapshot(Collection<T> splits,
Collection<Path> alreadyProcessedPaths) |
Constructor and Description |
---|
AbstractFileSource(Path[] inputPaths,
FileEnumerator.Provider fileEnumerator,
FileSplitAssigner.Provider splitAssigner,
BulkFormat<T,SplitT> readerFormat,
ContinuousEnumerationSettings continuousEnumerationSettings) |
AbstractFileSourceBuilder(Path[] inputPaths,
BulkFormat<T,SplitT> readerFormat,
FileEnumerator.Provider defaultFileEnumerator,
FileSplitAssigner.Provider defaultSplitAssigner) |
FileSourceSplit(String id,
Path filePath,
long offset,
long length)
Deprecated.
You should use
FileSourceSplit(String, Path, long, long, long, long) |
FileSourceSplit(String id,
Path filePath,
long offset,
long length,
long fileModificationTime,
long fileSize)
Constructs a split with host information.
|
FileSourceSplit(String id,
Path filePath,
long offset,
long length,
long fileModificationTime,
long fileSize,
String... hostnames)
Constructs a split with host information.
|
FileSourceSplit(String id,
Path filePath,
long offset,
long length,
long fileModificationTime,
long fileSize,
String[] hostnames,
CheckpointedPosition readerPosition)
Constructs a split with host information.
|
FileSourceSplit(String id,
Path filePath,
long offset,
long length,
String... hostnames)
Deprecated.
|
FileSourceSplit(String id,
Path filePath,
long offset,
long length,
String[] hostnames,
CheckpointedPosition readerPosition)
Deprecated.
|
Constructor and Description |
---|
PendingSplitsCheckpoint(Collection<SplitT> splits,
Collection<Path> alreadyProcessedPaths) |
Modifier and Type | Method and Description |
---|---|
Collection<FileSourceSplit> |
FileEnumerator.enumerateSplits(Path[] paths,
int minDesiredSplits)
Generates all file splits for the relevant files under the given paths.
|
Collection<FileSourceSplit> |
NonSplittingRecursiveEnumerator.enumerateSplits(Path[] paths,
int minDesiredSplits) |
protected boolean |
BlockSplittingRecursiveEnumerator.isFileSplittable(Path filePath) |
boolean |
DefaultFileFilter.test(Path path) |
Constructor and Description |
---|
BlockSplittingRecursiveEnumerator(java.util.function.Predicate<Path> fileFilter,
String[] nonSplittableFileSuffixes)
Creates a new enumerator that uses the given predicate as a filter for file paths, and avoids
splitting files with the given extension (typically to avoid splitting compressed files).
|
NonSplittingRecursiveEnumerator(java.util.function.Predicate<Path> fileFilter)
Creates a NonSplittingRecursiveEnumerator that uses the given predicate as a filter for file
paths.
|
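The enumerators above are configured with a java.util.function.Predicate<Path> file filter. A small sketch, assuming we only want to pick up .csv files:

```java
import java.util.function.Predicate;

import org.apache.flink.connector.file.src.enumerate.NonSplittingRecursiveEnumerator;
import org.apache.flink.core.fs.Path;

public class EnumeratorSketch {
    public static void main(String[] args) {
        // Hypothetical filter: only enumerate .csv files while walking the input directories.
        Predicate<Path> csvOnly = path -> path.getName().endsWith(".csv");

        NonSplittingRecursiveEnumerator enumerator = new NonSplittingRecursiveEnumerator(csvOnly);
        // The enumerator is normally supplied to a FileSource builder rather than called directly.
    }
}
```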
Constructor and Description |
---|
ContinuousFileSplitEnumerator(SplitEnumeratorContext<FileSourceSplit> context,
FileEnumerator enumerator,
FileSplitAssigner splitAssigner,
Path[] paths,
Collection<Path> alreadyDiscoveredPaths,
long discoveryInterval) |
Constructor and Description |
---|
ContinuousFileSplitEnumerator(SplitEnumeratorContext<FileSourceSplit> context,
FileEnumerator enumerator,
FileSplitAssigner splitAssigner,
Path[] paths,
Collection<Path> alreadyDiscoveredPaths,
long discoveryInterval) |
Modifier and Type | Method and Description |
---|---|
FileRecordFormat.Reader<T> |
FileRecordFormat.createReader(Configuration config,
Path filePath,
long splitOffset,
long splitLength)
Deprecated.
Creates a new reader to read in this format.
|
FileRecordFormat.Reader<T> |
FileRecordFormat.restoreReader(Configuration config,
Path filePath,
long restoredOffset,
long splitOffset,
long splitLength)
Deprecated.
Restores a reader from a checkpointed position.
|
Modifier and Type | Method and Description |
---|---|
Path |
PartitionTempFileManager.createPartitionDir(String... partitions)
Generates a new partition directory from the given partitions.
|
Path |
TableMetaStoreFactory.TableMetaStore.getLocationPath()
Deprecated.
|
Path |
PartitionCommitPolicy.Context.partitionPath()
Path of this partition.
|
Modifier and Type | Method and Description |
---|---|
Optional<Path> |
TableMetaStoreFactory.TableMetaStore.getPartition(LinkedHashMap<String,String> partitionSpec)
Gets the partition location path for this partition spec.
|
static List<Path> |
PartitionTempFileManager.listTaskTemporaryPaths(FileSystem fs,
Path basePath)
Returns task temporary paths in this checkpoint.
|
Modifier and Type | Method and Description |
---|---|
void |
TableMetaStoreFactory.TableMetaStore.createOrAlterPartition(LinkedHashMap<String,String> partitionSpec,
Path partitionPath)
After data has been inserted into the partition path, the partition may need to be
created (if it doesn't exist) or updated.
|
OutputFormat<T> |
OutputFormatFactory.createOutputFormat(Path path)
Creates an
OutputFormat for the specific path. |
default void |
TableMetaStoreFactory.TableMetaStore.finishWritingTable(Path tablePath)
After data has been inserted into the table, some follow-up work related to the metastore may
need to be done, such as reporting statistics to the metastore.
|
static List<Path> |
PartitionTempFileManager.listTaskTemporaryPaths(FileSystem fs,
Path basePath)
Returns task temporary paths in this checkpoint.
|
FileSystemOutputFormat.Builder<T> |
FileSystemOutputFormat.Builder.setTempPath(Path tmpPath) |
Modifier and Type | Method and Description |
---|---|
void |
PartitionLoader.loadNonPartition(List<Path> srcDirs)
Loads non-partitioned files to the output path.
|
void |
PartitionLoader.loadPartition(LinkedHashMap<String,String> partSpec,
List<Path> srcDirs)
Loads a single partition.
|
Constructor and Description |
---|
EmptyMetaStoreFactory(Path path) |
Modifier and Type | Method and Description |
---|---|
static <T> DataStream<PartitionCommitInfo> |
StreamingSink.compactionWriter(ProviderContext providerContext,
DataStream<T> inputStream,
long bucketCheckInterval,
StreamingFileSink.BucketsBuilder<T,String,? extends StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder,
FileSystemFactory fsFactory,
Path path,
CompactReader.Factory<T> readFactory,
long targetFileSize,
int parallelism)
Creates a file writer with compaction operators from the input stream.
|
protected void |
StreamingFileWriter.onPartFileOpened(String s,
Path newPath) |
protected abstract void |
AbstractStreamingWriter.onPartFileOpened(String partition,
Path newPath)
Notifies that a new file has been opened.
|
static DataStreamSink<?> |
StreamingSink.sink(ProviderContext providerContext,
DataStream<PartitionCommitInfo> writer,
Path locationPath,
ObjectIdentifier identifier,
List<String> partitionKeys,
TableMetaStoreFactory msFactory,
FileSystemFactory fsFactory,
Configuration options)
Creates a sink from the file writer.
|
Constructor and Description |
---|
PartitionCommitter(Path locationPath,
ObjectIdentifier tableIdentifier,
List<String> partitionKeys,
TableMetaStoreFactory metaStoreFactory,
FileSystemFactory fsFactory,
Configuration conf) |
Modifier and Type | Method and Description |
---|---|
static Path |
CompactOperator.convertFromUncompacted(Path path) |
Path |
CompactMessages.InputFile.getFile() |
Path |
CompactContext.getPath() |
Path |
CompactContext.CompactContextImpl.getPath() |
Modifier and Type | Method and Description |
---|---|
List<Path> |
CompactMessages.CompactionUnit.getPaths() |
Modifier and Type | Method and Description |
---|---|
static Path |
CompactOperator.convertFromUncompacted(Path path) |
static CompactContext |
CompactContext.create(Configuration config,
FileSystem fileSystem,
String partition,
Path path) |
protected void |
CompactFileWriter.onPartFileOpened(String partition,
Path newPath) |
Constructor and Description |
---|
InputFile(String partition,
Path file) |
Constructor and Description |
---|
CompactionUnit(int unitId,
String partition,
List<Path> unit) |
Modifier and Type | Method and Description |
---|---|
Collection<FileSourceSplit> |
HiveSourceFileEnumerator.enumerateSplits(Path[] paths,
int minDesiredSplits) |
Collection<FileSourceSplit> |
HiveSourceDynamicFileEnumerator.enumerateSplits(Path[] paths,
int minDesiredSplits) |
Constructor and Description |
---|
HiveSourceSplit(String id,
Path filePath,
long offset,
long length,
long fileModificationTime,
long fileSize,
String[] hostnames,
CheckpointedPosition readerPosition,
HiveTablePartition hiveTablePartition) |
HiveSourceSplit(String id,
Path filePath,
long offset,
long length,
String[] hostnames,
CheckpointedPosition readerPosition,
HiveTablePartition hiveTablePartition)
Deprecated.
|
Modifier and Type | Method and Description |
---|---|
org.apache.flink.connectors.hive.write.HiveOutputFormatFactory.HiveOutputFormat |
HiveOutputFormatFactory.createOutputFormat(Path path) |
Modifier and Type | Method and Description |
---|---|
static Path |
EntropyInjector.addEntropy(FileSystem fs,
Path path)
Handles entropy injection across regular and entropy-aware file systems.
|
static Path |
Path.fromLocalFile(File file)
Creates a path for the given local file.
|
Path |
DuplicatingFileSystem.CopyRequest.getDestination()
The path where to duplicate the source file.
|
Path |
SafetyNetWrapperFileSystem.getHomeDirectory() |
abstract Path |
FileSystem.getHomeDirectory()
Returns the path of the user's home directory in this file system.
|
Path |
LimitedConnectionsFileSystem.getHomeDirectory() |
Path |
Path.getParent()
Returns the parent of a path, i.e., everything that precedes the last separator, or
null
if at the root. |
Path |
FileInputSplit.getPath()
Returns the path of the file containing this split's data.
|
Path |
FileStatus.getPath()
Returns the corresponding Path to the FileStatus.
|
Path |
DuplicatingFileSystem.CopyRequest.getSource()
The path of the source file to duplicate.
|
Path |
SafetyNetWrapperFileSystem.getWorkingDirectory() |
abstract Path |
FileSystem.getWorkingDirectory()
Returns the path of the file system's current working directory.
|
Path |
LimitedConnectionsFileSystem.getWorkingDirectory() |
Path |
Path.makeQualified(FileSystem fs)
Returns a qualified path object.
|
Path |
OutputStreamAndPath.path() |
static Path |
EntropyInjector.removeEntropyMarkerIfPresent(FileSystem fs,
Path path)
Removes the entropy marker string from the path, if the given file system is an
entropy-injecting file system (implements
EntropyInjectingFileSystem) and the entropy
marker key is present. |
Path |
Path.suffix(String suffix)
Adds a suffix to the final name in the path.
|
Modifier and Type | Method and Description |
---|---|
static Path |
EntropyInjector.addEntropy(FileSystem fs,
Path path)
Handles entropy injection across regular and entropy-aware file systems.
|
boolean |
DuplicatingFileSystem.canFastDuplicate(Path source,
Path destination)
Tells if we can perform duplicate/copy between given paths.
|
FSDataOutputStream |
FileSystem.create(Path f,
boolean overwrite)
Deprecated.
Use
FileSystem.create(Path, WriteMode) instead. |
FSDataOutputStream |
SafetyNetWrapperFileSystem.create(Path f,
boolean overwrite,
int bufferSize,
short replication,
long blockSize) |
FSDataOutputStream |
FileSystem.create(Path f,
boolean overwrite,
int bufferSize,
short replication,
long blockSize)
Deprecated.
Deprecated because not well supported across types of file systems. Control the
behavior of specific file systems via configurations instead.
|
FSDataOutputStream |
LimitedConnectionsFileSystem.create(Path f,
boolean overwrite,
int bufferSize,
short replication,
long blockSize)
Deprecated.
|
FSDataOutputStream |
SafetyNetWrapperFileSystem.create(Path f,
FileSystem.WriteMode overwrite) |
abstract FSDataOutputStream |
FileSystem.create(Path f,
FileSystem.WriteMode overwriteMode)
Opens an FSDataOutputStream to a new file at the given path.
|
FSDataOutputStream |
LimitedConnectionsFileSystem.create(Path f,
FileSystem.WriteMode overwriteMode) |
static OutputStreamAndPath |
EntropyInjector.createEntropyAware(FileSystem fs,
Path path,
FileSystem.WriteMode writeMode)
Handles entropy injection across regular and entropy-aware file systems.
|
boolean |
SafetyNetWrapperFileSystem.delete(Path f,
boolean recursive) |
abstract boolean |
FileSystem.delete(Path f,
boolean recursive)
Delete a file.
|
boolean |
LimitedConnectionsFileSystem.delete(Path f,
boolean recursive) |
boolean |
SafetyNetWrapperFileSystem.exists(Path f) |
boolean |
FileSystem.exists(Path f)
Checks whether the given path exists.
|
boolean |
LimitedConnectionsFileSystem.exists(Path f) |
FileStatus |
SafetyNetWrapperFileSystem.getFileStatus(Path f) |
abstract FileStatus |
FileSystem.getFileStatus(Path f)
Return a file status object that represents the path.
|
FileStatus |
LimitedConnectionsFileSystem.getFileStatus(Path f) |
boolean |
SafetyNetWrapperFileSystem.initOutPathDistFS(Path outPath,
FileSystem.WriteMode writeMode,
boolean createDirectory) |
boolean |
FileSystem.initOutPathDistFS(Path outPath,
FileSystem.WriteMode writeMode,
boolean createDirectory)
Initializes output directories on distributed file systems according to the given write mode.
|
boolean |
SafetyNetWrapperFileSystem.initOutPathLocalFS(Path outPath,
FileSystem.WriteMode writeMode,
boolean createDirectory) |
boolean |
FileSystem.initOutPathLocalFS(Path outPath,
FileSystem.WriteMode writeMode,
boolean createDirectory)
Initializes output directories on local file systems according to the given write mode.
|
static boolean |
EntropyInjector.isEntropyInjecting(FileSystem fs,
Path target) |
FileStatus[] |
SafetyNetWrapperFileSystem.listStatus(Path f) |
abstract FileStatus[] |
FileSystem.listStatus(Path f)
List the statuses of the files/directories in the given path if the path is a directory.
|
FileStatus[] |
LimitedConnectionsFileSystem.listStatus(Path f) |
boolean |
SafetyNetWrapperFileSystem.mkdirs(Path f) |
abstract boolean |
FileSystem.mkdirs(Path f)
Make the given file and all non-existent parents into directories.
|
boolean |
LimitedConnectionsFileSystem.mkdirs(Path f) |
static DuplicatingFileSystem.CopyRequest |
DuplicatingFileSystem.CopyRequest.of(Path source,
Path destination)
A factory method for creating a simple pair of source/destination.
|
RecoverableFsDataOutputStream |
RecoverableWriter.open(Path path)
Opens a new recoverable stream to write to the given path.
|
FSDataInputStream |
SafetyNetWrapperFileSystem.open(Path f) |
abstract FSDataInputStream |
FileSystem.open(Path f)
Opens an FSDataInputStream at the indicated Path.
|
FSDataInputStream |
LimitedConnectionsFileSystem.open(Path f) |
FSDataInputStream |
SafetyNetWrapperFileSystem.open(Path f,
int bufferSize) |
abstract FSDataInputStream |
FileSystem.open(Path f,
int bufferSize)
Opens an FSDataInputStream at the indicated Path.
|
FSDataInputStream |
LimitedConnectionsFileSystem.open(Path f,
int bufferSize) |
static Path |
EntropyInjector.removeEntropyMarkerIfPresent(FileSystem fs,
Path path)
Removes the entropy marker string from the path, if the given file system is an
entropy-injecting file system (implements
EntropyInjectingFileSystem) and the entropy
marker key is present. |
boolean |
SafetyNetWrapperFileSystem.rename(Path src,
Path dst) |
abstract boolean |
FileSystem.rename(Path src,
Path dst)
Renames the file/directory src to dst.
|
boolean |
LimitedConnectionsFileSystem.rename(Path src,
Path dst) |
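These FileSystem operations all revolve around Path. A minimal sketch that resolves a file system from a path's URI scheme, creates a directory and a file, and lists the contents; the local directory is hypothetical:

```java
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileStatus;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

public class FileSystemSketch {
    public static void main(String[] args) throws Exception {
        Path dir = new Path("file:///tmp/flink-demo");   // hypothetical local directory
        FileSystem fs = dir.getFileSystem();             // resolves the file system from the URI scheme

        if (!fs.exists(dir)) {
            fs.mkdirs(dir);
        }

        Path file = new Path(dir, "hello.txt");
        try (FSDataOutputStream out = fs.create(file, FileSystem.WriteMode.OVERWRITE)) {
            out.write("hello".getBytes());
        }

        for (FileStatus status : fs.listStatus(dir)) {
            System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
        }
    }
}
```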
Constructor and Description |
---|
FileInputSplit(int num,
Path file,
long start,
long length,
String[] hosts)
Constructs a split with host information.
|
OutputStreamAndPath(FSDataOutputStream stream,
Path path)
Creates a OutputStreamAndPath.
|
Path(Path parent,
Path child)
Resolve a child path against a parent path.
|
Path(Path parent,
String child)
Resolve a child path against a parent path.
|
Path(String parent,
Path child)
Resolve a child path against a parent path.
|
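The constructors above resolve child paths against parents. A short sketch of building and manipulating paths; the hdfs:// and local locations are made up:

```java
import java.io.File;

import org.apache.flink.core.fs.Path;

public class PathSketch {
    public static void main(String[] args) {
        Path base = new Path("hdfs://namenode:8020/warehouse");
        Path table = new Path(base, "orders");            // resolve a child against a parent
        Path part = new Path(table, "dt=2023-01-01");

        System.out.println(part);                         // full path under the warehouse directory
        System.out.println(part.getParent());             // everything before the last separator
        System.out.println(part.suffix(".tmp"));          // suffix appended to the final name

        Path local = Path.fromLocalFile(new File("/tmp/data.csv"));
        System.out.println(local);                        // file URI for the local file
    }
}
```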
Modifier and Type | Method and Description |
---|---|
Path |
LocalFileSystem.getHomeDirectory() |
Path |
LocalFileStatus.getPath() |
Path |
LocalFileSystem.getWorkingDirectory() |
Modifier and Type | Method and Description |
---|---|
FSDataOutputStream |
LocalFileSystem.create(Path filePath,
FileSystem.WriteMode overwrite) |
boolean |
LocalFileSystem.delete(Path f,
boolean recursive) |
boolean |
LocalFileSystem.exists(Path f) |
FileStatus |
LocalFileSystem.getFileStatus(Path f) |
FileStatus[] |
LocalFileSystem.listStatus(Path f) |
boolean |
LocalFileSystem.mkdirs(Path f)
Recursively creates the directory specified by the provided path.
|
RecoverableFsDataOutputStream |
LocalRecoverableWriter.open(Path filePath) |
FSDataInputStream |
LocalFileSystem.open(Path f) |
FSDataInputStream |
LocalFileSystem.open(Path f,
int bufferSize) |
File |
LocalFileSystem.pathToFile(Path path)
Converts the given Path to a File for this file system.
|
boolean |
LocalFileSystem.rename(Path src,
Path dst) |
Modifier and Type | Method and Description |
---|---|
Path |
FileCopyTask.getPath() |
Constructor and Description |
---|
FileCopyTask(Path path,
String relativePath) |
Constructor and Description |
---|
AvroInputFormat(Path filePath,
Class<E> type) |
AvroOutputFormat(Path filePath,
Class<E> type) |
Modifier and Type | Method and Description |
---|---|
static RowCsvInputFormat.Builder |
RowCsvInputFormat.builder(TypeInformation<Row> typeInfo,
Path... filePaths)
Create a builder.
|
Modifier and Type | Method and Description |
---|---|
TableStats |
CsvFileFormatFactory.CsvBulkDecodingFormat.reportStatistics(List<Path> files,
DataType producedDataType) |
Constructor and Description |
---|
AbstractCsvInputFormat(Path[] filePaths,
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema csvSchema) |
Modifier and Type | Method and Description |
---|---|
static TableStats |
CsvFormatStatisticsReportUtil.getTableStatistics(List<Path> files) |
Modifier and Type | Method and Description |
---|---|
HadoopPathBasedPartFileWriter<IN,BucketID> |
HadoopPathBasedPartFileWriter.HadoopPathBasedBucketWriter.openNewInProgressFile(BucketID bucketID,
Path flinkPath,
long creationTime) |
Modifier and Type | Method and Description |
---|---|
TableStats |
ParquetFileFormatFactory.ParquetBulkDecodingFormat.reportStatistics(List<Path> files,
DataType producedDataType) |
TableStats |
ParquetColumnarRowInputFormat.reportStatistics(List<Path> files,
DataType producedDataType) |
Modifier and Type | Method and Description |
---|---|
static TableStats |
ParquetFormatStatisticsReportUtil.getTableStatistics(List<Path> files,
DataType producedDataType,
Configuration hadoopConfig,
boolean isUtcTimestamp) |
Modifier and Type | Method and Description |
---|---|
static ParquetColumnarRowSplitReader |
ParquetSplitReaderUtil.genPartColumnarRowReader(boolean utcTimestamp,
boolean caseSensitive,
Configuration conf,
String[] fullFieldNames,
DataType[] fullFieldTypes,
Map<String,Object> partitionSpec,
int[] selectedFields,
int batchSize,
Path path,
long splitStart,
long splitLength)
Util for generating partitioned
ParquetColumnarRowSplitReader. |
Modifier and Type | Method and Description |
---|---|
RecoverableFsDataOutputStream |
GSRecoverableWriter.open(Path path) |
Modifier and Type | Method and Description |
---|---|
Path |
OSSAccessor.objectToPath(String object) |
Modifier and Type | Method and Description |
---|---|
String |
OSSAccessor.pathToObject(Path path) |
Modifier and Type | Method and Description |
---|---|
RecoverableFsDataOutputStream |
OSSRecoverableWriter.open(Path path) |
Modifier and Type | Method and Description |
---|---|
RecoverableFsDataOutputStream |
S3RecoverableWriter.open(Path path) |
Modifier and Type | Method and Description |
---|---|
boolean |
FlinkS3PrestoFileSystem.delete(Path path,
boolean recursive) |
Constructor and Description |
---|
GraphCsvReader(Path edgePath,
ExecutionEnvironment context) |
GraphCsvReader(Path edgePath,
MapFunction<K,VV> mapper,
ExecutionEnvironment context) |
GraphCsvReader(Path vertexPath,
Path edgePath,
ExecutionEnvironment context) |
Modifier and Type | Method and Description |
---|---|
static OrcColumnarRowSplitReader<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> |
OrcSplitReaderUtil.genPartColumnarRowReader(String hiveVersion,
Configuration conf,
String[] fullFieldNames,
DataType[] fullFieldTypes,
Map<String,Object> partitionSpec,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
Path path,
long splitStart,
long splitLength)
Util for generating partitioned
OrcColumnarRowSplitReader. |
Modifier and Type | Method and Description |
---|---|
TableStats |
OrcColumnarRowInputFormat.reportStatistics(List<Path> files,
DataType producedDataType) |
TableStats |
OrcFileFormatFactory.OrcBulkDecodingFormat.reportStatistics(List<Path> files,
DataType producedDataType) |
Constructor and Description |
---|
OrcColumnarRowSplitReader(OrcShim<BATCH> shim,
Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
OrcColumnarRowSplitReader.ColumnBatchGenerator<BATCH> batchGenerator,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
Path path,
long splitStart,
long splitLength) |
OrcSplitReader(OrcShim<BATCH> shim,
Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
Path path,
long splitStart,
long splitLength) |
Modifier and Type | Method and Description |
---|---|
static OrcColumnarRowSplitReader<org.apache.orc.storage.ql.exec.vector.VectorizedRowBatch> |
OrcNoHiveSplitReaderUtil.genPartColumnarRowReader(Configuration conf,
String[] fullFieldNames,
DataType[] fullFieldTypes,
Map<String,Object> partitionSpec,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
Path path,
long splitStart,
long splitLength)
Util for generating partitioned
OrcColumnarRowSplitReader. |
Modifier and Type | Method and Description |
---|---|
org.apache.orc.RecordReader |
OrcNoHiveShim.createRecordReader(Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
Path path,
long splitStart,
long splitLength) |
Modifier and Type | Method and Description |
---|---|
org.apache.orc.RecordReader |
OrcShimV200.createRecordReader(Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
Path path,
long splitStart,
long splitLength) |
org.apache.orc.RecordReader |
OrcShim.createRecordReader(Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
Path path,
long splitStart,
long splitLength)
Creates an ORC
RecordReader from the configuration, schema, etc. |
Modifier and Type | Method and Description |
---|---|
static TableStats |
OrcFormatStatisticsReportUtil.getTableStatistics(List<Path> files,
DataType producedDataType,
Configuration hadoopConfig) |
Modifier and Type | Method and Description |
---|---|
PermanentBlobKey |
BlobClient.uploadFile(JobID jobId,
Path file)
Uploads a single file to the
PermanentBlobService of the given BlobServer. |
Modifier and Type | Method and Description |
---|---|
static List<PermanentBlobKey> |
BlobClient.uploadFiles(InetSocketAddress serverAddress,
Configuration clientConfig,
JobID jobId,
List<Path> files)
Uploads the JAR files to the
PermanentBlobService of the BlobServer at the
given address with HA as configured. |
Modifier and Type | Method and Description |
---|---|
static void |
ClientUtils.uploadJobGraphFiles(JobGraph jobGraph,
Collection<Path> userJars,
Collection<Tuple2<String,Path>> userArtifacts,
SupplierWithException<BlobClient,IOException> clientSupplier)
Uploads the given jars and artifacts required for the execution of the given
JobGraph
using the BlobClient from the given Supplier. |
static void |
ClientUtils.uploadJobGraphFiles(JobGraph jobGraph,
Collection<Path> userJars,
Collection<Tuple2<String,Path>> userArtifacts,
SupplierWithException<BlobClient,IOException> clientSupplier)
Uploads the given jars and artifacts required for the execution of the given
JobGraph
using the BlobClient from the given Supplier. |
Modifier and Type | Method and Description |
---|---|
Map<String,Future<Path>> |
Environment.getDistributedCacheEntries() |
Modifier and Type | Method and Description |
---|---|
Future<Path> |
FileCache.createTmpFile(String name,
DistributedCache.DistributedCacheEntry entry,
JobID jobID,
ExecutionAttemptID executionId)
If the file doesn't exist locally, retrieves the file from the blob service.
|
Modifier and Type | Method and Description |
---|---|
Path |
HadoopFileSystem.getHomeDirectory() |
Path |
HadoopFileStatus.getPath() |
Path |
HadoopFileSystem.getWorkingDirectory() |
Modifier and Type | Method and Description |
---|---|
HadoopDataOutputStream |
HadoopFileSystem.create(Path f,
boolean overwrite,
int bufferSize,
short replication,
long blockSize) |
HadoopDataOutputStream |
HadoopFileSystem.create(Path f,
FileSystem.WriteMode overwrite) |
boolean |
HadoopFileSystem.delete(Path f,
boolean recursive) |
boolean |
HadoopFileSystem.exists(Path f) |
FileStatus |
HadoopFileSystem.getFileStatus(Path f) |
FileStatus[] |
HadoopFileSystem.listStatus(Path f) |
boolean |
HadoopFileSystem.mkdirs(Path f) |
HadoopDataInputStream |
HadoopFileSystem.open(Path f) |
RecoverableFsDataOutputStream |
HadoopRecoverableWriter.open(Path filePath) |
HadoopDataInputStream |
HadoopFileSystem.open(Path f,
int bufferSize) |
boolean |
HadoopFileSystem.rename(Path src,
Path dst) |
static org.apache.hadoop.fs.Path |
HadoopFileSystem.toHadoopPath(Path path) |
Modifier and Type | Method and Description |
---|---|
static Path |
HighAvailabilityServicesUtils.getClusterHighAvailableStoragePath(Configuration configuration)
Gets the cluster's high-availability storage path from the provided configuration.
|
Modifier and Type | Method and Description |
---|---|
static Path |
FsJobArchivist.archiveJob(Path rootPath,
JobID jobId,
Collection<ArchivedJson> jsonToArchive)
Writes the given
AccessExecutionGraph to the FileSystem pointed to by JobManagerOptions.ARCHIVE_DIR. |
Modifier and Type | Method and Description |
---|---|
static Path |
FsJobArchivist.archiveJob(Path rootPath,
JobID jobId,
Collection<ArchivedJson> jsonToArchive)
Writes the given
AccessExecutionGraph to the FileSystem pointed to by JobManagerOptions.ARCHIVE_DIR. |
static Collection<ArchivedJson> |
FsJobArchivist.getArchivedJsons(Path file)
Reads the given archive file and returns a
Collection of contained ArchivedJson. |
Modifier and Type | Method and Description |
---|---|
List<Path> |
JobGraph.getUserJars()
Gets the list of assigned user jar paths.
|
Modifier and Type | Method and Description |
---|---|
void |
JobGraph.addJar(Path jar)
Adds the path of a JAR file required to run the job on a task manager.
|
Constructor and Description |
---|
DistributedRuntimeUDFContext(TaskInfo taskInfo,
UserCodeClassLoader userCodeClassLoader,
ExecutionConfig executionConfig,
Map<String,Future<Path>> cpTasks,
Map<String,Accumulator<?,?>> accumulators,
OperatorMetricGroup metrics,
ExternalResourceInfoProvider externalResourceInfoProvider,
JobID jobID) |
Constructor and Description |
---|
FileSystemStateStorageHelper(Path rootPath,
String prefix) |
Modifier and Type | Method and Description |
---|---|
static Path |
ChangelogTaskLocalStateStore.getLocalTaskOwnedDirectory(LocalRecoveryDirectoryProvider provider,
JobID jobID) |
Modifier and Type | Method and Description |
---|---|
static CheckpointStorage |
CheckpointStorageLoader.load(CheckpointStorage fromApplication,
Path defaultSavepointDirectory,
StateBackend configuredStateBackend,
Configuration config,
ClassLoader classLoader,
org.slf4j.Logger logger)
Loads the configured
CheckpointStorage for the job based on the following precedence
rules: |
Constructor and Description |
---|
RetrievableStreamStateHandle(Path filePath,
long stateSize) |
Modifier and Type | Method and Description |
---|---|
protected static Path |
AbstractFsCheckpointStorageAccess.createCheckpointDirectory(Path baseDirectory,
long checkpointId)
Creates the directory path for the data exclusive to a specific checkpoint.
|
static Path |
AbstractFsCheckpointStorageAccess.decodePathFromReference(CheckpointStorageLocationReference reference)
Decodes the given reference into a path.
|
Path |
FsStateBackend.getBasePath()
Deprecated.
Deprecated in favor of
FsStateBackend.getCheckpointPath(). |
Path |
FsCheckpointStorageLocation.getCheckpointDirectory() |
protected static Path |
AbstractFsCheckpointStorageAccess.getCheckpointDirectoryForJob(Path baseCheckpointPath,
JobID jobId)
Builds the directory into which a specific job checkpoints, i.e., the directory inside which it
creates the checkpoint-specific subdirectories.
|
Path |
AbstractFileStateBackend.getCheckpointPath()
Deprecated.
Gets the checkpoint base directory.
|
Path |
FsStateBackend.getCheckpointPath()
Deprecated.
Gets the base directory where all the checkpoints are stored.
|
Path |
AbstractFsCheckpointStorageAccess.getDefaultSavepointDirectory()
Gets the default directory for savepoints.
|
Path |
FsCompletedCheckpointStorageLocation.getExclusiveCheckpointDir() |
Path |
FileStateHandle.getFilePath()
Gets the path where this handle's state is stored.
|
Path |
FsCheckpointStorageLocation.getMetadataFilePath() |
Path |
AbstractFileStateBackend.getSavepointPath()
Deprecated.
Gets the directory where savepoints are stored by default (when no custom path is given to
the savepoint trigger command).
|
Path |
FsCheckpointStorageLocation.getSharedStateDirectory() |
Path |
FsCheckpointStorageLocation.getTaskOwnedStateDirectory() |
Modifier and Type | Method and Description |
---|---|
protected static Path |
AbstractFsCheckpointStorageAccess.createCheckpointDirectory(Path baseDirectory,
long checkpointId)
Creates the directory path for the data exclusive to a specific checkpoint.
|
protected abstract CheckpointStorageLocation |
AbstractFsCheckpointStorageAccess.createSavepointLocation(FileSystem fs,
Path location) |
protected CheckpointStorageLocation |
FsCheckpointStorageAccess.createSavepointLocation(FileSystem fs,
Path location) |
static CheckpointStorageLocationReference |
AbstractFsCheckpointStorageAccess.encodePathAsReference(Path path)
Encodes the given path as a reference in bytes.
|
protected static Path |
AbstractFsCheckpointStorageAccess.getCheckpointDirectoryForJob(Path baseCheckpointPath,
JobID jobId)
Builds the directory into which a specific job checkpoints, i.e., the directory inside which it
creates the checkpoint-specific subdirectories.
|
Constructor and Description |
---|
AbstractFileStateBackend(Path baseCheckpointPath,
Path baseSavepointPath)
Deprecated.
Creates a backend with the given optional checkpoint- and savepoint base directories.
|
AbstractFileStateBackend(Path baseCheckpointPath,
Path baseSavepointPath,
ReadableConfig configuration)
Deprecated.
Creates a new backend using the given checkpoint-/savepoint directories, or the values
defined in the given configuration.
|
AbstractFsCheckpointStorageAccess(JobID jobId,
Path defaultSavepointDirectory)
Creates a new checkpoint storage.
|
FileBasedStateOutputStream(FileSystem fileSystem,
Path path) |
FileStateHandle(Path filePath,
long stateSize)
Creates a new file state for the given file path.
|
FsCheckpointMetadataOutputStream(FileSystem fileSystem,
Path metadataFilePath,
Path exclusiveCheckpointDir) |
FsCheckpointStateOutputStream(Path basePath,
FileSystem fs,
int bufferSize,
int localStateThreshold) |
FsCheckpointStateOutputStream(Path basePath,
FileSystem fs,
int bufferSize,
int localStateThreshold,
boolean allowRelativePaths) |
FsCheckpointStateToolset(Path basePath,
DuplicatingFileSystem fs) |
FsCheckpointStorageAccess(FileSystem fs,
Path checkpointBaseDirectory,
Path defaultSavepointDirectory,
JobID jobId,
int fileSizeThreshold,
int writeBufferSize) |
FsCheckpointStorageAccess(Path checkpointBaseDirectory,
Path defaultSavepointDirectory,
JobID jobId,
int fileSizeThreshold,
int writeBufferSize) |
FsCheckpointStorageLocation(FileSystem fileSystem,
Path checkpointDir,
Path sharedStateDir,
Path taskOwnedStateDir,
CheckpointStorageLocationReference reference,
int fileStateSizeThreshold,
int writeBufferSize) |
FsCheckpointStreamFactory(FileSystem fileSystem,
Path checkpointDirectory,
Path sharedStateDirectory,
int fileStateSizeThreshold,
int writeBufferSize)
Creates a new stream factory that stores its checkpoint data in the file system and location
defined by the given Path.
|
FsCompletedCheckpointStorageLocation(FileSystem fs,
Path exclusiveCheckpointDir,
FileStateHandle metadataFileHandle,
String externalPointer) |
FSDataOutputStreamWrapper(FileSystem fileSystem,
Path metadataFilePath) |
FsStateBackend(Path checkpointDataUri)
Deprecated.
Creates a new state backend that stores its checkpoint data in the file system and location
defined by the given URI.
|
FsStateBackend(Path checkpointDataUri,
boolean asynchronousSnapshots)
Deprecated.
Creates a new state backend that stores its checkpoint data in the file system and location
defined by the given URI.
|
RelativeFileStateHandle(Path path,
String relativePath,
long stateSize) |
Modifier and Type | Method and Description |
---|---|
protected CheckpointStorageLocation |
MemoryBackendCheckpointStorageAccess.createSavepointLocation(FileSystem fs,
Path location) |
Constructor and Description |
---|
MemoryBackendCheckpointStorageAccess(JobID jobId,
Path checkpointsBaseDirectory,
Path defaultSavepointLocation,
int maxStateSize)
Creates a new MemoryBackendCheckpointStorage.
|
PersistentMetadataCheckpointStorageLocation(FileSystem fileSystem,
Path checkpointDir,
int maxStateSize)
Creates a checkpoint storage that persists metadata to a file system and stores state inline in
state handles with the metadata.
|
Modifier and Type | Method and Description |
---|---|
Path |
FileSystemCheckpointStorage.getCheckpointPath()
Gets the base directory where all the checkpoints are stored.
|
Path |
JobManagerCheckpointStorage.getCheckpointPath() |
Path |
FileSystemCheckpointStorage.getSavepointPath() |
Path |
JobManagerCheckpointStorage.getSavepointPath() |
Constructor and Description |
---|
FileSystemCheckpointStorage(Path checkpointDirectory)
Creates a new checkpoint storage that stores its checkpoint data in the file system and
location defined by the given URI.
|
FileSystemCheckpointStorage(Path checkpointDirectory,
int fileStateSizeThreshold,
int writeBufferSize)
Creates a new checkpoint storage that stores its checkpoint data in the file system and
location defined by the given URI.
|
JobManagerCheckpointStorage(Path checkpointPath,
int maxStateSize)
Creates a new JobManagerCheckpointStorage, optionally setting the paths to persist checkpoint
metadata and savepoints to, as well as configuring state thresholds and asynchronous
operations.
|
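FileSystemCheckpointStorage is constructed from a checkpoint directory Path, optionally with a small-state threshold and write buffer size. A sketch with illustrative values and a hypothetical bucket:

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.state.storage.FileSystemCheckpointStorage;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointStorageSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000);

        // State below 20 KiB is stored inline with the checkpoint metadata;
        // larger state is written through a 4 MiB buffer (illustrative values).
        FileSystemCheckpointStorage storage =
            new FileSystemCheckpointStorage(
                new Path("s3://my-bucket/checkpoints"), 20 * 1024, 4 * 1024 * 1024);

        env.getCheckpointConfig().setCheckpointStorage(storage);
    }
}
```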
Modifier and Type | Method and Description |
---|---|
Map<String,Future<Path>> |
RuntimeEnvironment.getDistributedCacheEntries() |
Constructor and Description |
---|
RuntimeEnvironment(JobID jobId,
JobVertexID jobVertexId,
ExecutionAttemptID executionId,
ExecutionConfig executionConfig,
TaskInfo taskInfo,
Configuration jobConfiguration,
Configuration taskConfiguration,
UserCodeClassLoader userCodeClassLoader,
MemoryManager memManager,
IOManager ioManager,
BroadcastVariableManager bcVarManager,
TaskStateManager taskStateManager,
GlobalAggregateManager aggregateManager,
AccumulatorRegistry accumulatorRegistry,
TaskKvStateRegistry kvStateRegistry,
InputSplitProvider splitProvider,
Map<String,Future<Path>> distCacheEntries,
ResultPartitionWriter[] writers,
IndexedInputGate[] inputGates,
TaskEventDispatcher taskEventDispatcher,
CheckpointResponder checkpointResponder,
TaskOperatorEventGateway operatorEventGateway,
TaskManagerRuntimeInfo taskManagerInfo,
TaskMetricGroup metrics,
Task containingTask,
ExternalResourceInfoProvider externalResourceInfoProvider) |
Modifier and Type | Method and Description |
---|---|
StreamOperator<TaggedOperatorSubtaskState> |
SavepointWriterOperatorFactory.createOperator(long savepointTimestamp,
Path savepointPath)
Creates a
StreamOperator to be used for generating and snapshotting state. |
Modifier and Type | Method and Description |
---|---|
static <OUT,OP extends StreamOperator<OUT>> |
SnapshotUtils.snapshot(OP operator,
int index,
long timestamp,
boolean isExactlyOnceMode,
boolean isUnalignedCheckpoint,
Configuration configuration,
Path savepointPath) |
static <OUT,OP extends StreamOperator<OUT>> |
SnapshotUtils.snapshot(OP operator,
int index,
long timestamp,
boolean isExactlyOnceMode,
boolean isUnalignedCheckpoint,
Configuration configuration,
Path savepointPath,
SavepointFormatType savepointFormatType) |
void |
FileCopyFunction.writeRecord(Path sourcePath) |
Modifier and Type | Method and Description |
---|---|
void |
StatePathExtractor.flatMap(OperatorState operatorState,
Collector<Path> out) |
Constructor and Description |
---|
SavepointOutputFormat(Path savepointPath) |
Constructor and Description |
---|
BroadcastStateBootstrapOperator(long timestamp,
Path savepointPath,
BroadcastStateBootstrapFunction<IN> function) |
KeyedStateBootstrapOperator(long timestamp,
Path savepointPath,
KeyedStateBootstrapFunction<K,IN> function) |
StateBootstrapOperator(long timestamp,
Path savepointPath,
StateBootstrapFunction<IN> function) |
StateBootstrapWrapperOperator(long timestamp,
Path savepointPath,
OP operator) |
Modifier and Type | Method and Description |
---|---|
Map<String,Future<Path>> |
SavepointEnvironment.getDistributedCacheEntries() |
Modifier and Type | Method and Description |
---|---|
Path |
StreamExecutionEnvironment.getDefaultSavepointDirectory()
Gets the default savepoint directory for this Job.
|
Modifier and Type | Method and Description |
---|---|
void |
CheckpointConfig.setCheckpointStorage(Path checkpointDirectory)
Configures the application to write out checkpoint snapshots to the configured directory.
|
StreamExecutionEnvironment |
StreamExecutionEnvironment.setDefaultSavepointDirectory(Path savepointDirectory)
Sets the default savepoint directory, where savepoints will be written to if none is explicitly
provided when one is triggered.
|
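Both of these setters accept a Path directly. A minimal sketch wiring a checkpoint directory and a default savepoint directory into a StreamExecutionEnvironment; the bucket names are assumptions:

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SavepointDirectorySketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(30_000);

        // Checkpoint snapshots go to this directory.
        env.getCheckpointConfig().setCheckpointStorage(new Path("s3://my-bucket/checkpoints"));

        // Savepoints triggered without an explicit target path land here.
        env.setDefaultSavepointDirectory(new Path("s3://my-bucket/savepoints"));
    }
}
```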
Modifier and Type | Method and Description |
---|---|
Path |
Bucket.getBucketPath() |
Path |
OutputStreamBasedPartFileWriter.OutputStreamBasedPendingFileRecoverable.getPath() |
Path |
OutputStreamBasedPartFileWriter.OutputStreamBasedInProgressFileRecoverable.getPath() |
Path |
InProgressFileWriter.PendingFileRecoverable.getPath() |
Modifier and Type | Method and Description |
---|---|
static <IN> StreamingFileSink.DefaultBulkFormatBuilder<IN> |
StreamingFileSink.forBulkFormat(Path basePath,
BulkWriter.Factory<IN> writerFactory)
Deprecated.
Creates the builder for a
StreamingFileSink with bulk-encoding format. |
static <IN> StreamingFileSink.DefaultRowFormatBuilder<IN> |
StreamingFileSink.forRowFormat(Path basePath,
Encoder<IN> encoder)
Deprecated.
Creates the builder for a
StreamingFileSink with row-encoding format. |
Bucket<IN,BucketID> |
DefaultBucketFactoryImpl.getNewBucket(int subtaskIndex,
BucketID bucketId,
Path bucketPath,
long initialPartCounter,
BucketWriter<IN,BucketID> bucketWriter,
RollingPolicy<IN,BucketID> rollingPolicy,
FileLifeCycleListener<BucketID> fileListener,
OutputFileConfig outputFileConfig) |
Bucket<IN,BucketID> |
BucketFactory.getNewBucket(int subtaskIndex,
BucketID bucketId,
Path bucketPath,
long initialPartCounter,
BucketWriter<IN,BucketID> bucketWriter,
RollingPolicy<IN,BucketID> rollingPolicy,
FileLifeCycleListener<BucketID> fileListener,
OutputFileConfig outputFileConfig) |
void |
FileLifeCycleListener.onPartFileOpened(BucketID bucketID,
Path newPath)
Notifies that a new file has been opened.
|
InProgressFileWriter<IN,BucketID> |
BulkBucketWriter.openNew(BucketID bucketId,
RecoverableFsDataOutputStream stream,
Path path,
long creationTime) |
InProgressFileWriter<IN,BucketID> |
RowWiseBucketWriter.openNew(BucketID bucketId,
RecoverableFsDataOutputStream stream,
Path path,
long creationTime) |
default CompactingFileWriter |
BucketWriter.openNewCompactingFile(CompactingFileWriter.Type type,
BucketID bucketID,
Path path,
long creationTime)
Used to create a new
CompactingFileWriter of the requested type. |
InProgressFileWriter<IN,BucketID> |
BucketWriter.openNewInProgressFile(BucketID bucketID,
Path path,
long creationTime)
Used to create a new
InProgressFileWriter. |
InProgressFileWriter<IN,BucketID> |
BulkBucketWriter.resumeFrom(BucketID bucketId,
RecoverableFsDataOutputStream stream,
Path path,
RecoverableWriter.ResumeRecoverable resumable,
long creationTime) |
InProgressFileWriter<IN,BucketID> |
RowWiseBucketWriter.resumeFrom(BucketID bucketId,
RecoverableFsDataOutputStream stream,
Path path,
RecoverableWriter.ResumeRecoverable resumable,
long creationTime) |
Constructor and Description |
---|
TimestampedFileInputSplit(long modificationTime,
int num,
Path file,
long start,
long length,
String[] hosts)
Creates a
TimestampedFileInputSplit based on the file modification time and the rest
of the information of the FileInputSplit, as returned by the underlying filesystem. |
Modifier and Type | Method and Description |
---|---|
Path |
StreamConfig.getSavepointDir(ClassLoader cl) |
Path |
StreamGraph.getSavepointDirectory() |
Modifier and Type | Method and Description |
---|---|
StreamGraphGenerator |
StreamGraphGenerator.setSavepointDir(Path savepointDir) |
void |
StreamConfig.setSavepointDir(Path directory) |
void |
StreamGraph.setSavepointDirectory(Path savepointDir) |
Modifier and Type | Method and Description |
---|---|
Optional<Path> |
CLI.getOutput() |
Modifier and Type | Method and Description |
---|---|
static void |
TestStreamEnvironment.setAsContext(MiniCluster miniCluster,
int parallelism,
Collection<Path> jarFiles,
Collection<URL> classpaths)
Sets the streaming context environment to a TestStreamEnvironment that runs its programs on
the given cluster with the given default parallelism and the specified jar files and class
paths.
|
Constructor and Description |
---|
TestStreamEnvironment(MiniCluster miniCluster,
Configuration config,
int parallelism,
Collection<Path> jarFiles,
Collection<URL> classPaths) |
Modifier and Type | Method and Description |
---|---|
TableStats |
FileBasedStatisticsReportableInputFormat.reportStatistics(List<Path> files,
DataType producedDataType)
Returns the estimated statistics of this input format.
|
Modifier and Type | Method and Description |
---|---|
protected void |
ResourceManager.checkJarPath(Path path) |
protected URL |
ResourceManager.getURLFromPath(Path path) |
Modifier and Type | Method and Description |
---|---|
static List<Tuple2<LinkedHashMap<String,String>,Path>> |
PartitionPathUtils.searchPartSpecAndPaths(FileSystem fs,
Path path,
int partitionNumber)
Searches all partitions under this path.
|
Modifier and Type | Method and Description |
---|---|
static LinkedHashMap<String,String> |
PartitionPathUtils.extractPartitionSpecFromPath(Path currPath)
Makes a partition spec from the given path.
|
static List<String> |
PartitionPathUtils.extractPartitionValues(Path currPath)
Makes partition values from the given path.
|
static GenericRowData |
PartitionPathUtils.fillPartitionValueForRecord(String[] fieldNames,
DataType[] fieldTypes,
int[] selectFields,
List<String> partitionKeys,
Path path,
String defaultPartValue)
Extracts partition values from the path and fills them into the record.
|
static FileStatus[] |
PartitionPathUtils.listStatusWithoutHidden(FileSystem fs,
Path dir)
Lists file statuses, excluding hidden files.
|
static List<Tuple2<LinkedHashMap<String,String>,Path>> |
PartitionPathUtils.searchPartSpecAndPaths(FileSystem fs,
Path path,
int partitionNumber)
Searches all partitions under this path.
|
Modifier and Type | Method and Description |
---|---|
static void |
TestEnvironment.setAsContext(MiniCluster miniCluster,
int parallelism,
Collection<Path> jarFiles,
Collection<URL> classPaths)
Sets the current
ExecutionEnvironment to be a TestEnvironment. |
static Configuration |
MiniClusterPipelineExecutorServiceLoader.updateConfigurationForMiniCluster(Configuration config,
Collection<Path> jarFiles,
Collection<URL> classPaths)
Populates a
Configuration that is compatible with this MiniClusterPipelineExecutorServiceLoader . |
Constructor and Description |
---|
TestEnvironment(MiniCluster miniCluster,
int parallelism,
boolean isObjectReuseEnabled,
Collection<Path> jarFiles,
Collection<URL> classPaths) |
Modifier and Type | Method and Description |
---|---|
static Path |
FileUtils.absolutizePath(Path pathToAbsolutize)
Absolutize the given path if it is relative.
|
static Path |
FileUtils.compressDirectory(Path directory,
Path target) |
static Path |
FileUtils.expandDirectory(Path file,
Path targetDirectory) |
Modifier and Type | Method and Description |
---|---|
static Path |
FileUtils.absolutizePath(Path pathToAbsolutize)
Absolutize the given path if it is relative.
|
static Path |
FileUtils.compressDirectory(Path directory,
Path target) |
static void |
FileUtils.copy(Path sourcePath,
Path targetPath,
boolean executable)
Copies all files from source to target and sets executable flag.
|
static Path |
FileUtils.expandDirectory(Path file,
Path targetDirectory) |
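A short sketch of the FileUtils helpers listed above; the source and target locations are hypothetical:

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.util.FileUtils;

public class FileUtilsSketch {
    public static void main(String[] args) throws Exception {
        // Made absolute against the working directory if it is relative.
        Path source = FileUtils.absolutizePath(new Path("data/input.bin"));
        Path target = new Path("file:///tmp/backup/input.bin");

        // Copies the file (or directory tree) and carries over the executable flag if requested.
        FileUtils.copy(source, target, false);
    }
}
```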
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.