Constructor and Description |
---|
DistributedCache(Map<String,Future<Path>> cacheCopyTasks) |
Constructor and Description |
---|
AbstractRuntimeUDFContext(TaskInfo taskInfo,
ClassLoader userCodeClassLoader,
ExecutionConfig executionConfig,
Map<String,Accumulator<?,?>> accumulators,
Map<String,Future<Path>> cpTasks,
MetricGroup metrics) |
RuntimeUDFContext(TaskInfo taskInfo,
ClassLoader userCodeClassLoader,
ExecutionConfig executionConfig,
Map<String,Future<Path>> cpTasks,
Map<String,Accumulator<?,?>> accumulators,
MetricGroup metrics) |
Modifier and Type | Field and Description |
---|---|
protected Path | FileInputFormat.filePath. Deprecated. Please override FileInputFormat.supportsMultiPaths() and use FileInputFormat.getFilePaths() and FileInputFormat.setFilePaths(Path...). |
protected Path | FileOutputFormat.outputFilePath. The path of the file to be written. |
Modifier and Type | Method and Description |
---|---|
Path | FileInputFormat.getFilePath(). Deprecated. Please use getFilePaths() instead. |
Path[] | FileInputFormat.getFilePaths(). Returns the paths of all files to be read by the FileInputFormat. |
Path | FileOutputFormat.getOutputFilePath() |
Modifier and Type | Method and Description |
---|---|
abstract boolean |
FilePathFilter.filterPath(Path filePath)
Returns
true if the filePath given is to be ignored when processing a
directory, e.g. |
boolean |
FilePathFilter.DefaultFilter.filterPath(Path filePath) |
boolean |
GlobFilePathFilter.filterPath(Path filePath) |
protected FileInputFormat.FileBaseStatistics |
FileInputFormat.getFileStats(FileInputFormat.FileBaseStatistics cachedStats,
Path[] filePaths,
ArrayList<FileStatus> files) |
protected FileInputFormat.FileBaseStatistics |
FileInputFormat.getFileStats(FileInputFormat.FileBaseStatistics cachedStats,
Path filePath,
FileSystem fs,
ArrayList<FileStatus> files) |
void |
FileInputFormat.setFilePath(Path filePath)
Sets a single path of a file to be read.
|
void |
FileInputFormat.setFilePaths(Path... filePaths)
Sets multiple paths of files to be read.
|
void |
FileOutputFormat.setOutputFilePath(Path path) |
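The deprecation notes above steer callers from the single-path accessors to the `Path...` variants. A minimal migration sketch, using TextInputFormat as a concrete FileInputFormat; the input location is a placeholder, and only a single path is passed because formats that do not support multiple paths (see FileInputFormat.supportsMultiPaths()) reject more than one.

```java
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;

public class FilePathConfiguration {
    public static void main(String[] args) {
        // TextInputFormat is a concrete FileInputFormat; its constructor takes a single Path.
        TextInputFormat format = new TextInputFormat(new Path("file:///tmp/input/part-0"));

        // Preferred over the deprecated setFilePath(Path): the varargs setter.
        // Pass more than one Path only if the concrete format supports multiple paths.
        format.setFilePaths(new Path("file:///tmp/input/part-0"));

        // getFilePaths() replaces the deprecated getFilePath().
        for (Path path : format.getFilePaths()) {
            System.out.println(path);
        }
    }
}
```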
Constructor and Description |
---|
DelimitedInputFormat(Path filePath,
Configuration configuration) |
FileInputFormat(Path filePath) |
FileOutputFormat(Path outputPath) |
GenericCsvInputFormat(Path filePath) |
Modifier and Type | Method and Description |
---|---|
Path |
CsvReader.getFilePath() |
Constructor and Description |
---|
CsvInputFormat(Path filePath) |
CsvOutputFormat(Path outputPath)
Creates an instance of CsvOutputFormat.
|
CsvOutputFormat(Path outputPath,
String fieldDelimiter)
Creates an instance of CsvOutputFormat.
|
CsvOutputFormat(Path outputPath,
String recordDelimiter,
String fieldDelimiter)
Creates an instance of CsvOutputFormat.
|
CsvReader(Path filePath,
ExecutionEnvironment executionContext) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
boolean[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
int[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames,
boolean[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames,
int[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
boolean[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
int[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames,
boolean[] includedFieldsMask) |
PojoCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
PojoTypeInfo<OUT> pojoTypeInfo,
String[] fieldNames,
int[] includedFieldsMask) |
PrimitiveInputFormat(Path filePath,
Class<OT> primitiveClass) |
PrimitiveInputFormat(Path filePath,
String delimiter,
Class<OT> primitiveClass) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes,
boolean emptyColumnAsNull) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes,
int[] selectedFields) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes,
String lineDelimiter,
String fieldDelimiter) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypes,
String lineDelimiter,
String fieldDelimiter,
int[] selectedFields) |
RowCsvInputFormat(Path filePath,
TypeInformation[] fieldTypeInfos,
String lineDelimiter,
String fieldDelimiter,
int[] selectedFields,
boolean emptyColumnAsNull) |
TextInputFormat(Path filePath) |
TextOutputFormat(Path outputPath) |
TextOutputFormat(Path outputPath,
String charset) |
TextValueInputFormat(Path filePath) |
TupleCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
TupleTypeInfoBase<OUT> tupleTypeInfo) |
TupleCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
TupleTypeInfoBase<OUT> tupleTypeInfo,
boolean[] includedFieldsMask) |
TupleCsvInputFormat(Path filePath,
String lineDelimiter,
String fieldDelimiter,
TupleTypeInfoBase<OUT> tupleTypeInfo,
int[] includedFieldsMask) |
TupleCsvInputFormat(Path filePath,
TupleTypeInfoBase<OUT> tupleTypeInfo) |
TupleCsvInputFormat(Path filePath,
TupleTypeInfoBase<OUT> tupleTypeInfo,
boolean[] includedFieldsMask) |
TupleCsvInputFormat(Path filePath,
TupleTypeInfoBase<OUT> tupleTypeInfo,
int[] includedFieldsMask) |
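As a usage sketch for the Path-based constructors above, the snippet below wires a TupleCsvInputFormat into the DataSet API; the file location and two-field schema are assumptions chosen for illustration.

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.TupleCsvInputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.typeutils.TupleTypeInfo;
import org.apache.flink.core.fs.Path;

public class TupleCsvExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Describe the record layout of the CSV file (placeholder schema).
        TupleTypeInfo<Tuple2<String, Integer>> typeInfo =
                TupleTypeInfo.getBasicTupleTypeInfo(String.class, Integer.class);

        // The (Path, TupleTypeInfoBase) constructor from the table above.
        TupleCsvInputFormat<Tuple2<String, Integer>> format =
                new TupleCsvInputFormat<>(new Path("file:///tmp/input.csv"), typeInfo);

        DataSet<Tuple2<String, Integer>> rows = env.createInput(format, typeInfo);
        rows.print();
    }
}
```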
Constructor and Description |
---|
ScalaCsvOutputFormat(Path outputPath)
Creates an instance of CsvOutputFormat.
|
ScalaCsvOutputFormat(Path outputPath,
String fieldDelimiter)
Creates an instance of CsvOutputFormat.
|
ScalaCsvOutputFormat(Path outputPath,
String recordDelimiter,
String fieldDelimiter)
Creates an instance of CsvOutputFormat.
|
Modifier and Type | Method and Description |
---|---|
org.apache.flink.connectors.hive.write.HiveOutputFormatFactory.HiveOutputFormat | HiveOutputFormatFactory.createOutputFormat(Path path) |
Modifier and Type | Method and Description |
---|---|
static Path |
Path.fromLocalFile(File file)
Creates a path for the given local file.
|
abstract Path |
FileSystem.getHomeDirectory()
Returns the path of the user's home directory in this file system.
|
Path |
SafetyNetWrapperFileSystem.getHomeDirectory() |
Path |
LimitedConnectionsFileSystem.getHomeDirectory() |
Path |
Path.getParent()
Returns the parent of a path, i.e., everything that precedes the last separator or
null
if at root. |
Path |
FileStatus.getPath()
Returns the corresponding Path to the FileStatus.
|
Path |
FileInputSplit.getPath()
Returns the path of the file containing this split's data.
|
abstract Path |
FileSystem.getWorkingDirectory()
Returns the path of the file system's current working directory.
|
Path |
SafetyNetWrapperFileSystem.getWorkingDirectory() |
Path |
LimitedConnectionsFileSystem.getWorkingDirectory() |
Path |
Path.makeQualified(FileSystem fs)
Returns a qualified path object.
|
Path |
OutputStreamAndPath.path() |
static Path |
EntropyInjector.removeEntropyMarkerIfPresent(FileSystem fs,
Path path)
Removes the entropy marker string from the path, if the given file system is an
entropy-injecting file system (implements
EntropyInjectingFileSystem ) and the entropy
marker key is present. |
Path |
Path.suffix(String suffix)
Adds a suffix to the final name in the path.
|
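The Path helpers listed above compose without touching any file system. A small sketch; the file name is a placeholder.

```java
import java.io.File;
import org.apache.flink.core.fs.Path;

public class PathManipulation {
    public static void main(String[] args) {
        // Path.fromLocalFile wraps a java.io.File as a Flink Path.
        Path file = Path.fromLocalFile(new File("/tmp/data/records.csv"));

        // Everything preceding the last separator; null only at the root.
        Path parent = file.getParent();

        // suffix(...) appends to the final name component: records.csv.bak
        Path backup = file.suffix(".bak");

        System.out.println(parent);
        System.out.println(backup);
    }
}
```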
Modifier and Type | Method and Description |
---|---|
FSDataOutputStream |
FileSystem.create(Path f,
boolean overwrite)
Deprecated.
Use
FileSystem.create(Path, WriteMode) instead. |
FSDataOutputStream |
FileSystem.create(Path f,
boolean overwrite,
int bufferSize,
short replication,
long blockSize)
Deprecated.
Deprecated because not well supported across types of file systems. Control the
behavior of specific file systems via configurations instead.
|
FSDataOutputStream |
SafetyNetWrapperFileSystem.create(Path f,
boolean overwrite,
int bufferSize,
short replication,
long blockSize) |
FSDataOutputStream |
LimitedConnectionsFileSystem.create(Path f,
boolean overwrite,
int bufferSize,
short replication,
long blockSize)
Deprecated.
|
abstract FSDataOutputStream |
FileSystem.create(Path f,
FileSystem.WriteMode overwriteMode)
Opens an FSDataOutputStream to a new file at the given path.
|
FSDataOutputStream |
SafetyNetWrapperFileSystem.create(Path f,
FileSystem.WriteMode overwrite) |
FSDataOutputStream |
LimitedConnectionsFileSystem.create(Path f,
FileSystem.WriteMode overwriteMode) |
static OutputStreamAndPath |
EntropyInjector.createEntropyAware(FileSystem fs,
Path path,
FileSystem.WriteMode writeMode)
Handles entropy injection across regular and entropy-aware file systems.
|
abstract boolean |
FileSystem.delete(Path f,
boolean recursive)
Delete a file.
|
boolean |
SafetyNetWrapperFileSystem.delete(Path f,
boolean recursive) |
boolean |
LimitedConnectionsFileSystem.delete(Path f,
boolean recursive) |
boolean |
FileSystem.exists(Path f)
Checks whether the given path exists.
|
boolean |
SafetyNetWrapperFileSystem.exists(Path f) |
boolean |
LimitedConnectionsFileSystem.exists(Path f) |
abstract FileStatus |
FileSystem.getFileStatus(Path f)
Return a file status object that represents the path.
|
FileStatus |
SafetyNetWrapperFileSystem.getFileStatus(Path f) |
FileStatus |
LimitedConnectionsFileSystem.getFileStatus(Path f) |
boolean |
FileSystem.initOutPathDistFS(Path outPath,
FileSystem.WriteMode writeMode,
boolean createDirectory)
Initializes output directories on distributed file systems according to the given write mode.
|
boolean |
SafetyNetWrapperFileSystem.initOutPathDistFS(Path outPath,
FileSystem.WriteMode writeMode,
boolean createDirectory) |
boolean |
FileSystem.initOutPathLocalFS(Path outPath,
FileSystem.WriteMode writeMode,
boolean createDirectory)
Initializes output directories on local file systems according to the given write mode.
|
boolean |
SafetyNetWrapperFileSystem.initOutPathLocalFS(Path outPath,
FileSystem.WriteMode writeMode,
boolean createDirectory) |
abstract FileStatus[] |
FileSystem.listStatus(Path f)
List the statuses of the files/directories in the given path if the path is a directory.
|
FileStatus[] |
SafetyNetWrapperFileSystem.listStatus(Path f) |
FileStatus[] |
LimitedConnectionsFileSystem.listStatus(Path f) |
abstract boolean |
FileSystem.mkdirs(Path f)
Make the given file and all non-existent parents into directories.
|
boolean |
SafetyNetWrapperFileSystem.mkdirs(Path f) |
boolean |
LimitedConnectionsFileSystem.mkdirs(Path f) |
RecoverableFsDataOutputStream |
RecoverableWriter.open(Path path)
Opens a new recoverable stream to write to the given path.
|
abstract FSDataInputStream |
FileSystem.open(Path f)
Opens an FSDataInputStream at the indicated Path.
|
FSDataInputStream |
SafetyNetWrapperFileSystem.open(Path f) |
FSDataInputStream |
LimitedConnectionsFileSystem.open(Path f) |
abstract FSDataInputStream |
FileSystem.open(Path f,
int bufferSize)
Opens an FSDataInputStream at the indicated Path.
|
FSDataInputStream |
SafetyNetWrapperFileSystem.open(Path f,
int bufferSize) |
FSDataInputStream |
LimitedConnectionsFileSystem.open(Path f,
int bufferSize) |
static Path |
EntropyInjector.removeEntropyMarkerIfPresent(FileSystem fs,
Path path)
Removes the entropy marker string from the path, if the given file system is an
entropy-injecting file system (implements
EntropyInjectingFileSystem ) and the entropy
marker key is present. |
abstract boolean |
FileSystem.rename(Path src,
Path dst)
Renames the file/directory src to dst.
|
boolean |
SafetyNetWrapperFileSystem.rename(Path src,
Path dst) |
boolean |
LimitedConnectionsFileSystem.rename(Path src,
Path dst) |
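A sketch of how the FileSystem operations above are typically combined; directory and file names are placeholders, and the non-deprecated create(Path, WriteMode) variant is used, as the deprecation notes suggest.

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileStatus;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

public class FileSystemRoundTrip {
    public static void main(String[] args) throws Exception {
        Path dir = new Path("file:///tmp/flink-fs-demo");
        FileSystem fs = dir.getFileSystem();

        if (!fs.exists(dir)) {
            fs.mkdirs(dir);
        }

        // Write a file using the WriteMode-based create(...) overload.
        Path file = new Path(dir, "hello.txt");
        try (FSDataOutputStream out = fs.create(file, FileSystem.WriteMode.OVERWRITE)) {
            out.write("hello".getBytes(StandardCharsets.UTF_8));
        }

        // List the directory, then clean up.
        for (FileStatus status : fs.listStatus(dir)) {
            System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
        }
        fs.delete(file, false);
    }
}
```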
Constructor and Description |
---|
FileInputSplit(int num,
Path file,
long start,
long length,
String[] hosts)
Constructs a split with host information.
|
OutputStreamAndPath(FSDataOutputStream stream,
Path path)
Creates a OutputStreamAndPath.
|
Path(Path parent,
Path child)
Resolve a child path against a parent path.
|
Path(Path parent,
String child)
Resolve a child path against a parent path.
|
Path(String parent,
Path child)
Resolve a child path against a parent path.
|
Modifier and Type | Method and Description |
---|---|
Path | LocalFileSystem.getHomeDirectory() |
Path | LocalFileStatus.getPath() |
Path | LocalFileSystem.getWorkingDirectory() |
Modifier and Type | Method and Description |
---|---|
FSDataOutputStream |
LocalFileSystem.create(Path filePath,
FileSystem.WriteMode overwrite) |
boolean |
LocalFileSystem.delete(Path f,
boolean recursive) |
boolean |
LocalFileSystem.exists(Path f) |
FileStatus |
LocalFileSystem.getFileStatus(Path f) |
FileStatus[] |
LocalFileSystem.listStatus(Path f) |
boolean |
LocalFileSystem.mkdirs(Path f)
Recursively creates the directory specified by the provided path.
|
RecoverableFsDataOutputStream |
LocalRecoverableWriter.open(Path filePath) |
FSDataInputStream |
LocalFileSystem.open(Path f) |
FSDataInputStream |
LocalFileSystem.open(Path f,
int bufferSize) |
File |
LocalFileSystem.pathToFile(Path path)
Converts the given Path to a File for this file system.
|
boolean |
LocalFileSystem.rename(Path src,
Path dst) |
Modifier and Type | Method and Description |
---|---|
Path | FileCopyTask.getPath() |
Constructor and Description |
---|
FileCopyTask(Path path, String relativePath) |
Constructor and Description |
---|
AvroInputFormat(Path filePath,
Class<E> type) |
AvroOutputFormat(Path filePath,
Class<E> type) |
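A minimal read sketch for the AvroInputFormat(Path, Class) constructor above, assuming a GenericRecord-encoded Avro container file at a placeholder location.

```java
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.avro.AvroInputFormat;

public class AvroReadExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Reads Avro container files and emits GenericRecord instances.
        AvroInputFormat<GenericRecord> format =
                new AvroInputFormat<>(new Path("file:///tmp/events.avro"), GenericRecord.class);

        DataSet<GenericRecord> records = env.createInput(format);
        records.first(10).print();
    }
}
```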
Modifier and Type | Method and Description |
---|---|
static RowCsvInputFormat.Builder |
RowCsvInputFormat.builder(TypeInformation<Row> typeInfo,
Path... filePaths)
Create a builder.
|
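RowCsvInputFormat.builder(...) above accepts the row type plus one or more input paths. The sketch below assumes the builder is finalized with build() and uses placeholder files and a two-field row schema.

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.csv.RowCsvInputFormat;
import org.apache.flink.types.Row;

public class RowCsvBuilderExample {
    public static void main(String[] args) {
        // Row schema of the CSV files (placeholder: a string column and an int column).
        TypeInformation<Row> rowType = Types.ROW(Types.STRING, Types.INT);

        // builder(...) takes the row type and a varargs list of input paths.
        RowCsvInputFormat format = RowCsvInputFormat
                .builder(rowType, new Path("file:///tmp/a.csv"), new Path("file:///tmp/b.csv"))
                .build();

        System.out.println(format);
    }
}
```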
Constructor and Description |
---|
AbstractCsvInputFormat(Path[] filePaths,
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema csvSchema) |
CsvInputFormat(Path[] filePaths,
DataType[] fieldTypes,
String[] fieldNames,
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema csvSchema,
RowType formatRowType,
int[] selectFields,
List<String> partitionKeys,
String defaultPartValue,
long limit,
int[] csvSelectFieldToProjectFieldMapping,
int[] csvSelectFieldToCsvFieldMapping,
boolean ignoreParseErrors) |
Modifier and Type | Method and Description |
---|---|
HadoopPathBasedPartFileWriter<IN,BucketID> |
HadoopPathBasedPartFileWriter.HadoopPathBasedBucketWriter.openNewInProgressFile(BucketID bucketID,
Path flinkPath,
long creationTime) |
Constructor and Description |
---|
JsonInputFormat(Path[] filePaths,
DataType[] fieldTypes,
String[] fieldNames,
int[] selectFields,
List<String> partitionKeys,
String defaultPartValue,
long limit,
int[] jsonSelectFieldToProjectFieldMapping,
int[] jsonSelectFieldToJsonFieldMapping,
JsonRowDataDeserializationSchema deserializationSchema) |
Constructor and Description |
---|
ParquetInputFormat(Path[] paths,
String[] fullFieldNames,
DataType[] fullFieldTypes,
int[] selectedFields,
String partDefaultName,
long limit,
Configuration conf,
boolean utcTimestamp) |
ParquetInputFormat(Path path,
org.apache.parquet.schema.MessageType messageType)
Reads Parquet files with the given Parquet file schema.
|
ParquetMapInputFormat(Path path,
org.apache.parquet.schema.MessageType messageType) |
ParquetPojoInputFormat(Path filePath,
org.apache.parquet.schema.MessageType messageType,
PojoTypeInfo<E> pojoTypeInfo) |
ParquetRowInputFormat(Path path,
org.apache.parquet.schema.MessageType messageType) |
Modifier and Type | Method and Description |
---|---|
static ParquetColumnarRowSplitReader |
ParquetSplitReaderUtil.genPartColumnarRowReader(boolean utcTimestamp,
boolean caseSensitive,
Configuration conf,
String[] fullFieldNames,
DataType[] fullFieldTypes,
Map<String,Object> partitionSpec,
int[] selectedFields,
int batchSize,
Path path,
long splitStart,
long splitLength)
Util for generating partitioned
ParquetColumnarRowSplitReader . |
Modifier and Type | Method and Description |
---|---|
RecoverableFsDataOutputStream | S3RecoverableWriter.open(Path path) |
Constructor and Description |
---|
GraphCsvReader(Path edgePath,
ExecutionEnvironment context) |
GraphCsvReader(Path edgePath,
MapFunction<K,VV> mapper,
ExecutionEnvironment context) |
GraphCsvReader(Path vertexPath,
Path edgePath,
ExecutionEnvironment context) |
Modifier and Type | Method and Description |
---|---|
URL |
MesosArtifactServer.addPath(Path path,
Path remoteFile)
Adds a path to the artifact server.
|
void |
MesosArtifactServer.removePath(Path remoteFile) |
scala.Option<URL> |
MesosArtifactServer.resolve(Path remoteFile) |
scala.Option<URL> |
MesosArtifactResolver.resolve(Path remoteFile) |
Constructor and Description |
---|
VirtualFileServerHandler(Path path) |
Modifier and Type | Method and Description |
---|---|
static OrcColumnarRowSplitReader<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> |
OrcSplitReaderUtil.genPartColumnarRowReader(String hiveVersion,
Configuration conf,
String[] fullFieldNames,
DataType[] fullFieldTypes,
Map<String,Object> partitionSpec,
int[] selectedFields,
List<OrcSplitReader.Predicate> conjunctPredicates,
int batchSize,
Path path,
long splitStart,
long splitLength)
Util for generating partitioned
OrcColumnarRowSplitReader . |
Constructor and Description |
---|
OrcColumnarRowSplitReader(OrcShim<BATCH> shim,
Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
OrcColumnarRowSplitReader.ColumnBatchGenerator<BATCH> batchGenerator,
List<OrcSplitReader.Predicate> conjunctPredicates,
int batchSize,
Path path,
long splitStart,
long splitLength) |
OrcInputFormat(Path path,
org.apache.orc.TypeDescription orcSchema,
Configuration orcConfig,
int batchSize)
Creates an OrcInputFormat.
|
OrcRowDataInputFormat(Path[] paths,
String[] fullFieldNames,
DataType[] fullFieldTypes,
int[] selectedFields,
String partDefaultName,
long limit,
Properties properties) |
OrcRowSplitReader(Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcSplitReader.Predicate> conjunctPredicates,
int batchSize,
Path path,
long splitStart,
long splitLength) |
OrcSplitReader(OrcShim<BATCH> shim,
Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcSplitReader.Predicate> conjunctPredicates,
int batchSize,
Path path,
long splitStart,
long splitLength) |
Modifier and Type | Method and Description |
---|---|
static OrcColumnarRowSplitReader<org.apache.orc.storage.ql.exec.vector.VectorizedRowBatch> |
OrcNoHiveSplitReaderUtil.genPartColumnarRowReader(Configuration conf,
String[] fullFieldNames,
DataType[] fullFieldTypes,
Map<String,Object> partitionSpec,
int[] selectedFields,
List<OrcSplitReader.Predicate> conjunctPredicates,
int batchSize,
Path path,
long splitStart,
long splitLength)
Util for generating partitioned
OrcColumnarRowSplitReader . |
Modifier and Type | Method and Description |
---|---|
org.apache.orc.RecordReader |
OrcNoHiveShim.createRecordReader(Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcSplitReader.Predicate> conjunctPredicates,
Path path,
long splitStart,
long splitLength) |
Modifier and Type | Method and Description |
---|---|
org.apache.orc.RecordReader |
OrcShimV200.createRecordReader(Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcSplitReader.Predicate> conjunctPredicates,
Path path,
long splitStart,
long splitLength) |
org.apache.orc.RecordReader |
OrcShim.createRecordReader(Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcSplitReader.Predicate> conjunctPredicates,
Path path,
long splitStart,
long splitLength)
Creates an ORC RecordReader from the given configuration, schema, and other parameters. |
Modifier and Type | Method and Description |
---|---|
PermanentBlobKey |
BlobClient.uploadFile(JobID jobId,
Path file)
Uploads a single file to the
PermanentBlobService of the given BlobServer . |
Modifier and Type | Method and Description |
---|---|
static List<PermanentBlobKey> |
BlobClient.uploadFiles(InetSocketAddress serverAddress,
Configuration clientConfig,
JobID jobId,
List<Path> files)
Uploads the JAR files to the
PermanentBlobService of the BlobServer at the
given address with HA as configured. |
Modifier and Type | Method and Description |
---|---|
static void |
ClientUtils.uploadJobGraphFiles(JobGraph jobGraph,
Collection<Path> userJars,
Collection<Tuple2<String,Path>> userArtifacts,
SupplierWithException<BlobClient,IOException> clientSupplier)
Uploads the given jars and artifacts required for the execution of the given
JobGraph
using the BlobClient from the given Supplier . |
Modifier and Type | Field and Description |
---|---|
Path | ContainerSpecification.Artifact.dest |
Path | ContainerSpecification.Artifact.Builder.dest |
Path | ContainerSpecification.Artifact.source |
Path | ContainerSpecification.Artifact.Builder.source |
Modifier and Type | Method and Description |
---|---|
ContainerSpecification.Artifact.Builder | ContainerSpecification.Artifact.Builder.setDest(Path dest) |
ContainerSpecification.Artifact.Builder | ContainerSpecification.Artifact.Builder.setSource(Path source) |
Constructor and Description |
---|
Artifact(Path source,
Path dest,
boolean executable,
boolean cachable,
boolean extract) |
Constructor and Description |
---|
KeytabOverlay(Path keytab) |
Krb5ConfOverlay(Path krb5Conf) |
Modifier and Type | Method and Description |
---|---|
Map<String,Future<Path>> | Environment.getDistributedCacheEntries() |
Modifier and Type | Method and Description |
---|---|
Future<Path> |
FileCache.createTmpFile(String name,
DistributedCache.DistributedCacheEntry entry,
JobID jobID,
ExecutionAttemptID executionId)
If the file does not exist locally, retrieves it from the blob service.
|
Modifier and Type | Method and Description |
---|---|
Path | HadoopFileSystem.getHomeDirectory() |
Path | HadoopFileStatus.getPath() |
Path | HadoopFileSystem.getWorkingDirectory() |
Modifier and Type | Method and Description |
---|---|
HadoopDataOutputStream |
HadoopFileSystem.create(Path f,
boolean overwrite,
int bufferSize,
short replication,
long blockSize) |
HadoopDataOutputStream |
HadoopFileSystem.create(Path f,
FileSystem.WriteMode overwrite) |
boolean |
HadoopFileSystem.delete(Path f,
boolean recursive) |
boolean |
HadoopFileSystem.exists(Path f) |
FileStatus |
HadoopFileSystem.getFileStatus(Path f) |
FileStatus[] |
HadoopFileSystem.listStatus(Path f) |
boolean |
HadoopFileSystem.mkdirs(Path f) |
RecoverableFsDataOutputStream |
HadoopRecoverableWriter.open(Path filePath) |
HadoopDataInputStream |
HadoopFileSystem.open(Path f) |
HadoopDataInputStream |
HadoopFileSystem.open(Path f,
int bufferSize) |
boolean |
HadoopFileSystem.rename(Path src,
Path dst) |
static org.apache.hadoop.fs.Path |
HadoopFileSystem.toHadoopPath(Path path) |
Modifier and Type | Method and Description |
---|---|
static Path | HighAvailabilityServicesUtils.getClusterHighAvailableStoragePath(Configuration configuration). Gets the cluster's high-availability storage path from the provided configuration. |
Modifier and Type | Method and Description |
---|---|
static Path |
FsJobArchivist.archiveJob(Path rootPath,
JobID jobId,
Collection<ArchivedJson> jsonToArchive)
Writes the given
AccessExecutionGraph to the FileSystem pointed to by JobManagerOptions.ARCHIVE_DIR . |
Modifier and Type | Method and Description |
---|---|
static Path |
FsJobArchivist.archiveJob(Path rootPath,
JobID jobId,
Collection<ArchivedJson> jsonToArchive)
Writes the given
AccessExecutionGraph to the FileSystem pointed to by JobManagerOptions.ARCHIVE_DIR . |
static Collection<ArchivedJson> |
FsJobArchivist.getArchivedJsons(Path file)
Reads the given archive file and returns a
Collection of contained ArchivedJson . |
Modifier and Type | Method and Description |
---|---|
List<Path> | JobGraph.getUserJars(). Gets the list of assigned user jar paths. |
Modifier and Type | Method and Description |
---|---|
void | JobGraph.addJar(Path jar). Adds the path of a JAR file required to run the job on a task manager. |
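JobGraph.addJar(Path) and getUserJars() above are low-level runtime APIs; client code normally lets the ExecutionEnvironment attach jars, but a direct sketch looks like this. The job name and jar location are placeholders, and the single-argument JobGraph constructor is assumed.

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.jobgraph.JobGraph;

public class JobGraphJarExample {
    public static void main(String[] args) {
        JobGraph jobGraph = new JobGraph("example-job");

        // Register a user-code jar so it gets shipped to the task managers.
        jobGraph.addJar(new Path("file:///opt/jobs/my-udfs.jar"));

        System.out.println(jobGraph.getUserJars());
    }
}
```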
Constructor and Description |
---|
DistributedRuntimeUDFContext(TaskInfo taskInfo,
ClassLoader userCodeClassLoader,
ExecutionConfig executionConfig,
Map<String,Future<Path>> cpTasks,
Map<String,Accumulator<?,?>> accumulators,
MetricGroup metrics,
ExternalResourceInfoProvider externalResourceInfoProvider) |
Constructor and Description |
---|
RetrievableStreamStateHandle(Path filePath,
long stateSize) |
Modifier and Type | Method and Description |
---|---|
protected static Path |
AbstractFsCheckpointStorage.createCheckpointDirectory(Path baseDirectory,
long checkpointId)
Creates the directory path for the data exclusive to a specific checkpoint.
|
static Path |
AbstractFsCheckpointStorage.decodePathFromReference(CheckpointStorageLocationReference reference)
Decodes the given reference into a path.
|
Path |
FsStateBackend.getBasePath()
Deprecated.
Deprecated in favor of
FsStateBackend.getCheckpointPath() . |
Path |
FsCheckpointStorageLocation.getCheckpointDirectory() |
protected static Path |
AbstractFsCheckpointStorage.getCheckpointDirectoryForJob(Path baseCheckpointPath,
JobID jobId)
Builds the directory into which a specific job writes its checkpoints, i.e., the directory inside which it creates the checkpoint-specific subdirectories.
|
Path |
AbstractFileStateBackend.getCheckpointPath()
Gets the checkpoint base directory.
|
Path |
FsStateBackend.getCheckpointPath()
Gets the base directory where all the checkpoints are stored.
|
Path |
AbstractFsCheckpointStorage.getDefaultSavepointDirectory()
Gets the default directory for savepoints.
|
Path |
FsCompletedCheckpointStorageLocation.getExclusiveCheckpointDir() |
Path |
FileStateHandle.getFilePath()
Gets the path where this handle's state is stored.
|
Path |
FsCheckpointStorageLocation.getMetadataFilePath() |
Path |
AbstractFileStateBackend.getSavepointPath()
Gets the directory where savepoints are stored by default (when no custom path is given to
the savepoint trigger command).
|
Path |
FsCheckpointStorageLocation.getSharedStateDirectory() |
Path |
FsCheckpointStorageLocation.getTaskOwnedStateDirectory() |
Modifier and Type | Method and Description |
---|---|
protected static Path |
AbstractFsCheckpointStorage.createCheckpointDirectory(Path baseDirectory,
long checkpointId)
Creates the directory path for the data exclusive to a specific checkpoint.
|
protected abstract CheckpointStorageLocation |
AbstractFsCheckpointStorage.createSavepointLocation(FileSystem fs,
Path location) |
protected CheckpointStorageLocation |
FsCheckpointStorage.createSavepointLocation(FileSystem fs,
Path location) |
static CheckpointStorageLocationReference |
AbstractFsCheckpointStorage.encodePathAsReference(Path path)
Encodes the given path as a reference in bytes.
|
protected static Path |
AbstractFsCheckpointStorage.getCheckpointDirectoryForJob(Path baseCheckpointPath,
JobID jobId)
Builds the directory into which a specific job writes its checkpoints, i.e., the directory inside which it creates the checkpoint-specific subdirectories.
|
Constructor and Description |
---|
AbstractFileStateBackend(Path baseCheckpointPath,
Path baseSavepointPath)
Creates a backend with the given optional checkpoint- and savepoint base directories.
|
AbstractFileStateBackend(Path baseCheckpointPath,
Path baseSavepointPath,
ReadableConfig configuration)
Creates a new backend using the given checkpoint-/savepoint directories, or the values
defined in the given configuration.
|
AbstractFsCheckpointStorage(JobID jobId,
Path defaultSavepointDirectory)
Creates a new checkpoint storage.
|
FileBasedStateOutputStream(FileSystem fileSystem,
Path path) |
FileStateHandle(Path filePath,
long stateSize)
Creates a new file state for the given file path.
|
FsCheckpointMetadataOutputStream(FileSystem fileSystem,
Path metadataFilePath,
Path exclusiveCheckpointDir) |
FsCheckpointStateOutputStream(Path basePath,
FileSystem fs,
int bufferSize,
int localStateThreshold) |
FsCheckpointStateOutputStream(Path basePath,
FileSystem fs,
int bufferSize,
int localStateThreshold,
boolean allowRelativePaths) |
FsCheckpointStorage(FileSystem fs,
Path checkpointBaseDirectory,
Path defaultSavepointDirectory,
JobID jobId,
int fileSizeThreshold,
int writeBufferSize) |
FsCheckpointStorage(Path checkpointBaseDirectory,
Path defaultSavepointDirectory,
JobID jobId,
int fileSizeThreshold,
int writeBufferSize) |
FsCheckpointStorageLocation(FileSystem fileSystem,
Path checkpointDir,
Path sharedStateDir,
Path taskOwnedStateDir,
CheckpointStorageLocationReference reference,
int fileStateSizeThreshold,
int writeBufferSize) |
FsCheckpointStreamFactory(FileSystem fileSystem,
Path checkpointDirectory,
Path sharedStateDirectory,
int fileStateSizeThreshold,
int writeBufferSize)
Creates a new stream factory that stores its checkpoint data in the file system and location
defined by the given Path.
|
FsCompletedCheckpointStorageLocation(FileSystem fs,
Path exclusiveCheckpointDir,
FileStateHandle metadataFileHandle,
String externalPointer) |
FsStateBackend(Path checkpointDataUri)
Creates a new state backend that stores its checkpoint data in the file system and location
defined by the given URI.
|
FsStateBackend(Path checkpointDataUri,
boolean asynchronousSnapshots)
Creates a new state backend that stores its checkpoint data in the file system and location
defined by the given URI.
|
RelativeFileStateHandle(Path path,
String relativePath,
long stateSize) |
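The FsStateBackend(Path) constructors above take the checkpoint location directly as a Path. A minimal configuration sketch; the checkpoint directory and interval are placeholders.

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FsStateBackendSetup {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint data goes to the file system and location identified by this Path.
        FsStateBackend backend = new FsStateBackend(new Path("file:///tmp/flink-checkpoints"));

        env.setStateBackend(backend);
        env.enableCheckpointing(10_000);
    }
}
```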
Modifier and Type | Method and Description |
---|---|
protected CheckpointStorageLocation |
MemoryBackendCheckpointStorage.createSavepointLocation(FileSystem fs,
Path location) |
Constructor and Description |
---|
MemoryBackendCheckpointStorage(JobID jobId,
Path checkpointsBaseDirectory,
Path defaultSavepointLocation,
int maxStateSize)
Creates a new MemoryBackendCheckpointStorage.
|
PersistentMetadataCheckpointStorageLocation(FileSystem fileSystem,
Path checkpointDir,
int maxStateSize)
Creates a checkpoint storage that persists metadata to a file system and stores state inline in state handles together with the metadata.
|
Modifier and Type | Method and Description |
---|---|
Map<String,Future<Path>> | RuntimeEnvironment.getDistributedCacheEntries() |
Constructor and Description |
---|
RuntimeEnvironment(JobID jobId,
JobVertexID jobVertexId,
ExecutionAttemptID executionId,
ExecutionConfig executionConfig,
TaskInfo taskInfo,
Configuration jobConfiguration,
Configuration taskConfiguration,
ClassLoader userCodeClassLoader,
MemoryManager memManager,
IOManager ioManager,
BroadcastVariableManager bcVarManager,
TaskStateManager taskStateManager,
GlobalAggregateManager aggregateManager,
AccumulatorRegistry accumulatorRegistry,
TaskKvStateRegistry kvStateRegistry,
InputSplitProvider splitProvider,
Map<String,Future<Path>> distCacheEntries,
ResultPartitionWriter[] writers,
IndexedInputGate[] inputGates,
TaskEventDispatcher taskEventDispatcher,
CheckpointResponder checkpointResponder,
TaskOperatorEventGateway operatorEventGateway,
TaskManagerRuntimeInfo taskManagerInfo,
TaskMetricGroup metrics,
Task containingTask,
ExternalResourceInfoProvider externalResourceInfoProvider) |
Modifier and Type | Method and Description |
---|---|
static Path | WebMonitorUtils.validateAndNormalizeUri(URI archiveDirUri). Checks and normalizes the given URI. |
Constructor and Description |
---|
FileSystemStateStorageHelper(Path rootPath, String prefix) |
Modifier and Type | Method and Description |
---|---|
StreamOperator<TaggedOperatorSubtaskState> |
SavepointWriterOperatorFactory.createOperator(long savepointTimestamp,
Path savepointPath)
Creates a
StreamOperator to be used for generating and snapshotting state. |
Modifier and Type | Method and Description |
---|---|
static <OUT,OP extends StreamOperator<OUT>> |
SnapshotUtils.snapshot(OP operator,
int index,
long timestamp,
boolean isExactlyOnceMode,
boolean isUnalignedCheckpoint,
CheckpointStorageWorkerView checkpointStorage,
Path savepointPath) |
Constructor and Description |
---|
SavepointOutputFormat(Path savepointPath) |
Constructor and Description |
---|
BroadcastStateBootstrapOperator(long timestamp,
Path savepointPath,
BroadcastStateBootstrapFunction<IN> function) |
KeyedStateBootstrapOperator(long timestamp,
Path savepointPath,
KeyedStateBootstrapFunction<K,IN> function) |
StateBootstrapOperator(long timestamp,
Path savepointPath,
StateBootstrapFunction<IN> function) |
Modifier and Type | Method and Description |
---|---|
Map<String,Future<Path>> | SavepointEnvironment.getDistributedCacheEntries() |
Modifier and Type | Method and Description |
---|---|
Path | Bucket.getBucketPath() |
Modifier and Type | Method and Description |
---|---|
static <IN> StreamingFileSink.DefaultBulkFormatBuilder<IN> |
StreamingFileSink.forBulkFormat(Path basePath,
BulkWriter.Factory<IN> writerFactory)
Creates the builder for a
StreamingFileSink with bulk-encoding format. |
static <IN> StreamingFileSink.DefaultRowFormatBuilder<IN> |
StreamingFileSink.forRowFormat(Path basePath,
Encoder<IN> encoder)
Creates the builder for a
StreamingFileSink with row-encoding format. |
Bucket<IN,BucketID> |
DefaultBucketFactoryImpl.getNewBucket(int subtaskIndex,
BucketID bucketId,
Path bucketPath,
long initialPartCounter,
BucketWriter<IN,BucketID> bucketWriter,
RollingPolicy<IN,BucketID> rollingPolicy,
OutputFileConfig outputFileConfig) |
Bucket<IN,BucketID> |
BucketFactory.getNewBucket(int subtaskIndex,
BucketID bucketId,
Path bucketPath,
long initialPartCounter,
BucketWriter<IN,BucketID> bucketWriter,
RollingPolicy<IN,BucketID> rollingPolicy,
OutputFileConfig outputFileConfig) |
InProgressFileWriter<IN,BucketID> |
BucketWriter.openNewInProgressFile(BucketID bucketID,
Path path,
long creationTime)
Used to create a new
InProgressFileWriter . |
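The base Path handed to forRowFormat/forBulkFormat above is where the sink creates its bucket directories. A row-encoded sketch with a placeholder output directory:

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class RowFormatSinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000); // part files are finalized on checkpoints

        // Each record is written through the Encoder into buckets under the base path.
        StreamingFileSink<String> sink = StreamingFileSink
                .forRowFormat(new Path("file:///tmp/output"), new SimpleStringEncoder<String>("UTF-8"))
                .build();

        env.fromElements("a", "b", "c").addSink(sink);
        env.execute("row-format-sink-demo");
    }
}
```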
Constructor and Description |
---|
Buckets(Path basePath,
BucketAssigner<IN,BucketID> bucketAssigner,
BucketFactory<IN,BucketID> bucketFactory,
BucketWriter<IN,BucketID> bucketWriter,
RollingPolicy<IN,BucketID> rollingPolicy,
int subtaskIndex,
OutputFileConfig outputFileConfig)
Creates a new, empty bucket manager.
|
BulkFormatBuilder(Path basePath,
BulkWriter.Factory<IN> writerFactory,
BucketAssigner<IN,BucketID> assigner) |
BulkFormatBuilder(Path basePath,
BulkWriter.Factory<IN> writerFactory,
BucketAssigner<IN,BucketID> assigner,
CheckpointRollingPolicy<IN,BucketID> policy,
long bucketCheckInterval,
BucketFactory<IN,BucketID> bucketFactory,
OutputFileConfig outputFileConfig) |
RowFormatBuilder(Path basePath,
Encoder<IN> encoder,
BucketAssigner<IN,BucketID> bucketAssigner) |
RowFormatBuilder(Path basePath,
Encoder<IN> encoder,
BucketAssigner<IN,BucketID> assigner,
RollingPolicy<IN,BucketID> policy,
long bucketCheckInterval,
BucketFactory<IN,BucketID> bucketFactory,
OutputFileConfig outputFileConfig) |
Constructor and Description |
---|
TimestampedFileInputSplit(long modificationTime,
int num,
Path file,
long start,
long length,
String[] hosts)
Creates a
TimestampedFileInputSplit based on the file modification time and the rest
of the information of the FileInputSplit , as returned by the underlying filesystem. |
Modifier and Type | Method and Description |
---|---|
static void |
TestStreamEnvironment.setAsContext(JobExecutor jobExecutor,
int parallelism,
Collection<Path> jarFiles,
Collection<URL> classpaths)
Sets the streaming context environment to a TestStreamEnvironment that runs its programs on
the given cluster with the given default parallelism and the specified jar files and class
paths.
|
Constructor and Description |
---|
TestStreamEnvironment(JobExecutor jobExecutor,
int parallelism,
Collection<Path> jarFiles,
Collection<URL> classPaths) |
Modifier and Type | Method and Description |
---|---|
Path[] | FileSystemFormatFactory.ReaderContext.getPaths(). Read paths. |
Modifier and Type | Method and Description |
---|---|
Path |
PartitionTempFileManager.createPartitionDir(String... partitions)
Generate a new partition directory with partitions.
|
Path |
TableMetaStoreFactory.TableMetaStore.getLocationPath()
Deprecated.
|
Path |
PartitionCommitPolicy.Context.partitionPath()
Path of this partition.
|
Modifier and Type | Method and Description |
---|---|
Optional<Path> |
TableMetaStoreFactory.TableMetaStore.getPartition(LinkedHashMap<String,String> partitionSpec)
Get partition location path for this partition spec.
|
static List<Path> |
PartitionTempFileManager.listTaskTemporaryPaths(FileSystem fs,
Path basePath,
long checkpointId)
Returns task temporary paths in this checkpoint.
|
Modifier and Type | Method and Description |
---|---|
void |
TableMetaStoreFactory.TableMetaStore.createOrAlterPartition(LinkedHashMap<String,String> partitionSpec,
Path partitionPath)
After data has been inserted into the partition path, the partition may need to be
created (if doesn't exists) or updated.
|
OutputFormat<T> |
OutputFormatFactory.createOutputFormat(Path path)
Create a
OutputFormat with specific path. |
static DataStreamSink<RowData> |
FileSystemTableSink.createStreamingSink(Configuration conf,
Path path,
List<String> partitionKeys,
ObjectIdentifier tableIdentifier,
boolean overwrite,
DataStream<RowData> inputStream,
StreamingFileSink.BucketsBuilder<RowData,String,? extends StreamingFileSink.BucketsBuilder<RowData,?,?>> bucketsBuilder,
TableMetaStoreFactory msFactory,
FileSystemFactory fsFactory,
long rollingCheckInterval) |
static void |
PartitionTempFileManager.deleteCheckpoint(FileSystem fs,
Path basePath,
long checkpointId)
Delete checkpoint path.
|
static long[] |
PartitionTempFileManager.headCheckpoints(FileSystem fs,
Path basePath,
long toCpId)
Returns checkpoints whose keys are less than or equal to
toCpId in the temporary base path. |
static List<Path> |
PartitionTempFileManager.listTaskTemporaryPaths(FileSystem fs,
Path basePath,
long checkpointId)
Returns task temporary paths in this checkpoint.
|
FileSystemOutputFormat.Builder<T> |
FileSystemOutputFormat.Builder.setTempPath(Path tmpPath) |
Modifier and Type | Method and Description |
---|---|
void |
PartitionLoader.loadNonPartition(List<Path> srcDirs)
Loads non-partitioned files to the output path.
|
void |
PartitionLoader.loadPartition(LinkedHashMap<String,String> partSpec,
List<Path> srcDirs)
Load a single partition.
|
Constructor and Description |
---|
EmptyMetaStoreFactory(Path path) |
FileSystemTableSink(ObjectIdentifier tableIdentifier,
boolean isBounded,
TableSchema schema,
Path path,
List<String> partitionKeys,
String defaultPartName,
Map<String,String> properties)
Construct a file system table sink.
|
FileSystemTableSource(TableSchema schema,
Path path,
List<String> partitionKeys,
String defaultPartName,
Map<String,String> properties)
Construct a file system table source.
|
Constructor and Description |
---|
StreamingFileCommitter(Path locationPath,
ObjectIdentifier tableIdentifier,
List<String> partitionKeys,
TableMetaStoreFactory metaStoreFactory,
FileSystemFactory fsFactory,
Configuration conf) |
Modifier and Type | Method and Description |
---|---|
static List<Tuple2<LinkedHashMap<String,String>,Path>> |
PartitionPathUtils.searchPartSpecAndPaths(FileSystem fs,
Path path,
int partitionNumber)
Search all partitions in this path.
|
Modifier and Type | Method and Description |
---|---|
static LinkedHashMap<String,String> |
PartitionPathUtils.extractPartitionSpecFromPath(Path currPath)
Make partition spec from path.
|
static List<String> |
PartitionPathUtils.extractPartitionValues(Path currPath)
Make partition values from path.
|
static GenericRowData |
PartitionPathUtils.fillPartitionValueForRecord(String[] fieldNames,
DataType[] fieldTypes,
int[] selectFields,
List<String> partitionKeys,
Path path,
String defaultPartValue)
Extracts partition values from the path and fills them into the record.
|
static FileStatus[] |
PartitionPathUtils.listStatusWithoutHidden(FileSystem fs,
Path dir)
Lists file statuses, excluding hidden files.
|
static List<Tuple2<LinkedHashMap<String,String>,Path>> |
PartitionPathUtils.searchPartSpecAndPaths(FileSystem fs,
Path path,
int partitionNumber)
Search all partitions in this path.
|
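A sketch of the partition-path helpers above, assuming PartitionPathUtils lives in org.apache.flink.table.utils and a Hive-style key=value directory layout; the path itself is a placeholder.

```java
import java.util.LinkedHashMap;
import java.util.List;

import org.apache.flink.core.fs.Path;
import org.apache.flink.table.utils.PartitionPathUtils;

public class PartitionPathExample {
    public static void main(String[] args) {
        Path partitionDir = new Path("/warehouse/events/dt=2020-06-01/region=eu");

        // key -> value mapping parsed from the key=value segments of the path
        LinkedHashMap<String, String> spec =
                PartitionPathUtils.extractPartitionSpecFromPath(partitionDir);

        // just the values, in path order
        List<String> values = PartitionPathUtils.extractPartitionValues(partitionDir);

        System.out.println(spec);
        System.out.println(values);
    }
}
```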
Modifier and Type | Method and Description |
---|---|
static void |
TestEnvironment.setAsContext(JobExecutor jobExecutor,
int parallelism,
Collection<Path> jarFiles,
Collection<URL> classPaths)
Sets the current
ExecutionEnvironment to be a TestEnvironment . |
Constructor and Description |
---|
TestEnvironment(JobExecutor jobExecutor,
int parallelism,
boolean isObjectReuseEnabled,
Collection<Path> jarFiles,
Collection<URL> classPaths) |
Modifier and Type | Method and Description |
---|---|
static Path |
FileUtils.absolutizePath(Path pathToAbsolutize)
Absolutize the given path if it is relative.
|
static Path |
FileUtils.compressDirectory(Path directory,
Path target) |
static Path |
FileUtils.expandDirectory(Path file,
Path targetDirectory) |
Modifier and Type | Method and Description |
---|---|
static Path |
FileUtils.absolutizePath(Path pathToAbsolutize)
Absolutize the given path if it is relative.
|
static Path |
FileUtils.compressDirectory(Path directory,
Path target) |
static void |
FileUtils.copy(Path sourcePath,
Path targetPath,
boolean executable)
Copies all files from source to target and sets executable flag.
|
static boolean |
FileUtils.deletePathIfEmpty(FileSystem fileSystem,
Path path)
Deletes the path if it is empty.
|
static Path |
FileUtils.expandDirectory(Path file,
Path targetDirectory) |
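A sketch combining the FileUtils helpers above (assuming org.apache.flink.util.FileUtils): compress a directory, expand the archive elsewhere, and resolve the result to an absolute path. All locations are placeholders, and the source directory is assumed to exist.

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.util.FileUtils;

public class FileUtilsExample {
    public static void main(String[] args) throws Exception {
        Path sourceDir = new Path("file:///tmp/demo-dir");
        Path archive = new Path("file:///tmp/demo-dir.zip");
        Path restoreTarget = new Path("file:///tmp/demo-dir-restored");

        // compressDirectory returns the Path of the written archive,
        // expandDirectory the Path of the expanded content.
        Path written = FileUtils.compressDirectory(sourceDir, archive);
        Path expanded = FileUtils.expandDirectory(written, restoreTarget);

        System.out.println(FileUtils.absolutizePath(expanded));
    }
}
```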
Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.