Constructor and Description |
---|
TableInputFormat(Configuration hConf) Deprecated. |
Modifier and Type | Method and Description |
---|---|
static Configuration | HadoopUtils.getHadoopConfiguration(Configuration flinkConfiguration) Returns a new Hadoop Configuration object using the path to the Hadoop configuration configured in the Flink main configuration (flink-conf.yaml). |
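For orientation, a minimal sketch of how this method is typically used, assuming the HadoopUtils variant from flink-hadoop-compatibility (the runtime HadoopUtils listed further below exposes the same method):

```java
import org.apache.flink.api.java.hadoop.mapred.utils.HadoopUtils;
import org.apache.flink.configuration.GlobalConfiguration;

public class HadoopConfFromFlink {
    public static void main(String[] args) {
        // Load flink-conf.yaml from FLINK_CONF_DIR and derive a Hadoop
        // Configuration from the Hadoop conf path configured there.
        org.apache.flink.configuration.Configuration flinkConf =
                GlobalConfiguration.loadConfiguration();
        org.apache.hadoop.conf.Configuration hadoopConf =
                HadoopUtils.getHadoopConfiguration(flinkConf);
        System.out.println(hadoopConf.get("fs.defaultFS"));
    }
}
```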
Modifier and Type | Field and Description |
---|---|
protected Configuration | HadoopOutputFormatBase.configuration |
Modifier and Type | Method and Description |
---|---|
Configuration | HadoopOutputFormatBase.getConfiguration() |
Configuration | HadoopInputFormatBase.getConfiguration() |
Modifier and Type | Method and Description |
---|---|
static void | HadoopUtils.mergeHadoopConf(Configuration hadoopConfig) Merges the Hadoop configuration into the given Configuration. |
Constructor and Description |
---|
HBaseSinkFunction(String hTableName, Configuration conf, HBaseMutationConverter<T> mutationConverter, long bufferFlushMaxSizeInBytes, long bufferFlushMaxMutations, long bufferFlushIntervalMillis) |
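A hedged sketch of wiring this constructor up, assuming the connector package org.apache.flink.connector.hbase.sink; the table name, column family, and flush thresholds are illustrative:

```java
import org.apache.flink.connector.hbase.sink.HBaseMutationConverter;
import org.apache.flink.connector.hbase.sink.HBaseSinkFunction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSinkExample {
    public static HBaseSinkFunction<String> buildSink() {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
        HBaseMutationConverter<String> converter = new HBaseMutationConverter<String>() {
            @Override
            public void open() {}

            @Override
            public Mutation convertToMutation(String record) {
                // Use the record itself as row key; write it into cf:v.
                Put put = new Put(Bytes.toBytes(record));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("v"), Bytes.toBytes(record));
                return put;
            }
        };
        return new HBaseSinkFunction<>(
                "my_table", conf, converter,
                2 * 1024 * 1024, // flush once 2 MiB of mutations are buffered
                1000,            // or once 1000 mutations are buffered
                5000);           // or every 5 seconds
    }
}
```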
Constructor and Description |
---|
HBaseDynamicTableSource(Configuration conf, String tableName, HBaseTableSchema hbaseSchema, String nullStringLiteral) |
HBaseInputFormat(Configuration hConf) Constructs an InputFormat with an HBase configuration to read data from HBase. |
HBaseLookupFunction(Configuration configuration, String hTableName, HBaseTableSchema hbaseTableSchema) |
HBaseRowDataInputFormat(Configuration conf, String tableName, HBaseTableSchema schema, String nullStringLiteral) |
HBaseRowDataLookupFunction(Configuration configuration, String hTableName, HBaseTableSchema hbaseTableSchema, String nullStringLiteral) |
HBaseRowInputFormat(Configuration conf, String tableName, HBaseTableSchema schema) |
HBaseTableSource(Configuration conf, String tableName) Takes the HBase configuration and the name of the table to read. |
HBaseTableSource(Configuration conf, String tableName, HBaseTableSchema hbaseSchema, int[] projectFields) |
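As an illustration of the two-argument HBaseTableSource constructor, a sketch under stated assumptions: the package name, table name, column family, and columns are all hypothetical:

```java
import org.apache.flink.addons.hbase.HBaseTableSource; // package name assumed
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HBaseSourceExample {
    public static HBaseTableSource build() {
        Configuration conf = HBaseConfiguration.create();
        HBaseTableSource source = new HBaseTableSource(conf, "orders");
        source.setRowKey("row_key", String.class);    // declare the row key field
        source.addColumn("cf", "amount", Long.class); // map cf:amount to a Long column
        return source;
    }
}
```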
Modifier and Type | Method and Description |
---|---|
static Configuration | HBaseConfigurationUtil.deserializeConfiguration(byte[] serializedConfig, Configuration targetConfig) Deserializes a Hadoop Configuration from a byte[]. |
static byte[] | HBaseConfigurationUtil.serializeConfiguration(Configuration conf) Serializes a Hadoop Configuration into a byte[]. |
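A round-trip sketch (HBaseConfigurationUtil's package is assumed from the connector module; the property is illustrative):

```java
import org.apache.flink.connector.hbase.util.HBaseConfigurationUtil; // package assumed
import org.apache.hadoop.conf.Configuration;

public class ConfRoundTrip {
    public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        conf.set("hbase.zookeeper.quorum", "zk1:2181");

        // Ship the configuration as bytes (e.g. inside serialized operator state) ...
        byte[] bytes = HBaseConfigurationUtil.serializeConfiguration(conf);

        // ... and restore it into a fresh target Configuration on the other side.
        Configuration restored =
                HBaseConfigurationUtil.deserializeConfiguration(bytes, new Configuration(false));
        System.out.println(restored.get("hbase.zookeeper.quorum"));
    }
}
```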
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.hive.conf.HiveConf | HiveConfUtils.create(Configuration conf) Creates a HiveConf instance from the given Hadoop configuration. |
Modifier and Type | Method and Description |
---|---|
CompressWriterFactory<IN> | CompressWriterFactory.withHadoopCompression(String codecName, Configuration hadoopConfig) Compresses the data using the provided Hadoop CompressionCodec and Configuration. |
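A sketch assuming the CompressWriters/DefaultExtractor entry points from flink-compress; the codec name must resolve via Hadoop's CompressionCodecFactory:

```java
import org.apache.flink.formats.compress.CompressWriterFactory;
import org.apache.flink.formats.compress.CompressWriters;
import org.apache.flink.formats.compress.extractor.DefaultExtractor;
import org.apache.hadoop.conf.Configuration;

import java.io.IOException;

public class CompressExample {
    public static CompressWriterFactory<String> gzipFactory() throws IOException {
        // "Gzip" is resolved against the codecs known to the Hadoop config.
        return CompressWriters.forExtractor(new DefaultExtractor<String>())
                .withHadoopCompression("Gzip", new Configuration());
    }
}
```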
Modifier and Type | Method and Description |
---|---|
HadoopFileCommitter | DefaultHadoopFileCommitterFactory.create(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath) |
HadoopFileCommitter | HadoopFileCommitterFactory.create(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath) Creates a new Hadoop file committer for writing. |
HadoopFileCommitter | DefaultHadoopFileCommitterFactory.recoverForCommit(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath, org.apache.hadoop.fs.Path tempFilePath) |
HadoopFileCommitter | HadoopFileCommitterFactory.recoverForCommit(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath, org.apache.hadoop.fs.Path inProgressPath) Creates a Hadoop file committer for committing the pending file. |
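A sketch of the commit lifecycle under the default factory; the package and target path are assumptions, and error handling is elided:

```java
import org.apache.flink.formats.hadoop.bulk.DefaultHadoopFileCommitterFactory; // package assumed
import org.apache.flink.formats.hadoop.bulk.HadoopFileCommitter;
import org.apache.flink.formats.hadoop.bulk.HadoopFileCommitterFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

public class CommitterExample {
    public static void commit() throws IOException {
        Configuration conf = new Configuration();
        HadoopFileCommitterFactory factory = new DefaultHadoopFileCommitterFactory();

        HadoopFileCommitter committer = factory.create(conf, new Path("/output/part-0"));
        // Write the data to committer.getTempFilePath(), then finalize:
        committer.preCommit();
        committer.commit();
    }
}
```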
Constructor and Description |
---|
HadoopPathBasedBucketWriter(Configuration configuration, HadoopPathBasedBulkWriter.Factory<IN> bulkWriterFactory, HadoopFileCommitterFactory fileCommitterFactory) |
Constructor and Description |
---|
HadoopRenameFileCommitter(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath) |
HadoopRenameFileCommitter(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath, org.apache.hadoop.fs.Path inProgressPath) |
Modifier and Type | Method and Description |
---|---|
ParquetTableSource.Builder | ParquetTableSource.Builder.withConfiguration(Configuration config) Sets a Hadoop Configuration for the Parquet Reader. |
Constructor and Description |
---|
ParquetInputFormat(Path[] paths, String[] fullFieldNames, DataType[] fullFieldTypes, int[] selectedFields, String partDefaultName, long limit, Configuration conf, boolean utcTimestamp) |
Modifier and Type | Method and Description |
---|---|
static ParquetWriterFactory<RowData> | ParquetRowDataBuilder.createWriterFactory(RowType rowType, Configuration conf, boolean utcTimestamp) Creates a Parquet BulkWriter.Factory. |
protected org.apache.parquet.hadoop.api.WriteSupport<RowData> | ParquetRowDataBuilder.getWriteSupport(Configuration conf) |
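A minimal sketch, assuming the flink-parquet row builder and a hypothetical two-field row type:

```java
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

public class ParquetFactoryExample {
    public static ParquetWriterFactory<RowData> build() {
        RowType rowType = RowType.of(
                new LogicalType[] {new VarCharType(VarCharType.MAX_LENGTH), new IntType()},
                new String[] {"name", "age"});
        // utcTimestamp = true writes TIMESTAMP columns in UTC.
        return ParquetRowDataBuilder.createWriterFactory(rowType, new Configuration(), true);
    }
}
```

The resulting factory is the bulk format handed to a streaming file sink.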
Constructor and Description |
---|
FlinkParquetBuilder(RowType rowType, Configuration conf, boolean utcTimestamp) |
Modifier and Type | Method and Description |
---|---|
Configuration | SerializableConfiguration.conf() |
Modifier and Type | Method and Description |
---|---|
void | ParquetRecordReader.initialize(org.apache.parquet.hadoop.ParquetFileReader reader, Configuration configuration) |
org.apache.parquet.io.api.RecordMaterializer<Row> | RowReadSupport.prepareForRead(Configuration configuration, Map<String,String> keyValueMetaData, org.apache.parquet.schema.MessageType fileSchema, org.apache.parquet.hadoop.api.ReadSupport.ReadContext readContext) |
Constructor and Description |
---|
SerializableConfiguration(Configuration conf) |
Modifier and Type | Method and Description |
---|---|
static ParquetColumnarRowSplitReader | ParquetSplitReaderUtil.genPartColumnarRowReader(boolean utcTimestamp, boolean caseSensitive, Configuration conf, String[] fullFieldNames, DataType[] fullFieldTypes, Map<String,Object> partitionSpec, int[] selectedFields, int batchSize, Path path, long splitStart, long splitLength) Utility for generating a partitioned ParquetColumnarRowSplitReader. |
Constructor and Description |
---|
ParquetColumnarRowSplitReader(boolean utcTimestamp, boolean caseSensitive, Configuration conf, LogicalType[] selectedTypes, String[] selectedFieldNames, ParquetColumnarRowSplitReader.ColumnBatchGenerator generator, int batchSize, org.apache.hadoop.fs.Path path, long splitStart, long splitLength) |
Constructor and Description |
---|
SequenceFileWriterFactory(Configuration hadoopConf, Class<K> keyClass, Class<V> valueClass) Creates a new SequenceFileWriterFactory using the given builder to assemble the SequenceFileWriter. |
SequenceFileWriterFactory(Configuration hadoopConf, Class<K> keyClass, Class<V> valueClass, String compressionCodecName) Creates a new SequenceFileWriterFactory using the given builder to assemble the SequenceFileWriter. |
SequenceFileWriterFactory(Configuration hadoopConf, Class<K> keyClass, Class<V> valueClass, String compressionCodecName, org.apache.hadoop.io.SequenceFile.CompressionType compressionType) Creates a new SequenceFileWriterFactory using the given builder to assemble the SequenceFileWriter. |
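A sketch of the four-argument variant; the codec name "Snappy" is an assumption and must be available on the cluster:

```java
import org.apache.flink.formats.sequencefile.SequenceFileWriterFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class SeqFileExample {
    public static SequenceFileWriterFactory<LongWritable, Text> build() {
        // Elements written with this factory are Tuple2<LongWritable, Text>.
        return new SequenceFileWriterFactory<>(
                new Configuration(), LongWritable.class, Text.class, "Snappy");
    }
}
```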
Modifier and Type | Method and Description |
---|---|
String | EnvironmentVariableKeyProvider.getStorageAccountKey(String s, Configuration configuration) |
Modifier and Type | Method and Description |
---|---|
protected abstract URI | AbstractS3FileSystemFactory.getInitURI(URI fsUri, Configuration hadoopConfig) |
Modifier and Type | Method and Description |
---|---|
protected URI | S3FileSystemFactory.getInitURI(URI fsUri, Configuration hadoopConfig) |
Constructor and Description |
---|
HadoopS3AccessHelper(org.apache.hadoop.fs.s3a.S3AFileSystem s3a, Configuration conf) |
Modifier and Type | Method and Description |
---|---|
Configuration | HCatInputFormatBase.getConfiguration() Returns the Configuration of the HCatInputFormat. |
Constructor and Description |
---|
HCatInputFormatBase(String database, String table, Configuration config) Creates an HCatInputFormat for the given database, table, and Configuration. |
Constructor and Description |
---|
HCatInputFormat(String database, String table, Configuration config) |
Modifier and Type | Field and Description |
---|---|
protected Configuration | OrcInputFormat.conf |
Modifier and Type | Method and Description |
---|---|
static OrcColumnarRowSplitReader<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> | OrcSplitReaderUtil.genPartColumnarRowReader(String hiveVersion, Configuration conf, String[] fullFieldNames, DataType[] fullFieldTypes, Map<String,Object> partitionSpec, int[] selectedFields, List<OrcSplitReader.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) Utility for generating a partitioned OrcColumnarRowSplitReader. |
OrcTableSource.Builder | OrcTableSource.Builder.withConfiguration(Configuration config) Sets a Hadoop Configuration for the ORC reader. |
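A sketch of the withConfiguration hook on the (legacy) OrcTableSource builder; the path and ORC schema string are hypothetical:

```java
import org.apache.flink.orc.OrcTableSource;
import org.apache.hadoop.conf.Configuration;

public class OrcSourceExample {
    public static OrcTableSource build() {
        Configuration orcConf = new Configuration();
        return OrcTableSource.builder()
                .path("hdfs:///warehouse/orders")
                .forOrcSchema("struct<id:bigint,amount:double>")
                .withConfiguration(orcConf)
                .build();
    }
}
```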
Constructor and Description |
---|
OrcColumnarRowSplitReader(OrcShim<BATCH> shim, Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, OrcColumnarRowSplitReader.ColumnBatchGenerator<BATCH> batchGenerator, List<OrcSplitReader.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) |
OrcInputFormat(Path path, org.apache.orc.TypeDescription orcSchema, Configuration orcConfig, int batchSize) Creates an OrcInputFormat. |
OrcRowInputFormat(String path, String schemaString, Configuration orcConfig) Creates an OrcRowInputFormat. |
OrcRowInputFormat(String path, String schemaString, Configuration orcConfig, int batchSize) Creates an OrcRowInputFormat. |
OrcRowInputFormat(String path, org.apache.orc.TypeDescription orcSchema, Configuration orcConfig, int batchSize) Creates an OrcRowInputFormat. |
OrcRowSplitReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcSplitReader.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) |
OrcSplitReader(OrcShim<BATCH> shim, Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcSplitReader.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) |
Modifier and Type | Method and Description |
---|---|
static OrcColumnarRowSplitReader<org.apache.orc.storage.ql.exec.vector.VectorizedRowBatch> | OrcNoHiveSplitReaderUtil.genPartColumnarRowReader(Configuration conf, String[] fullFieldNames, DataType[] fullFieldTypes, Map<String,Object> partitionSpec, int[] selectedFields, List<OrcSplitReader.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) Utility for generating a partitioned OrcColumnarRowSplitReader. |
Constructor and Description |
---|
OrcNoHiveBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes) |
Modifier and Type | Method and Description |
---|---|
org.apache.orc.RecordReader | OrcNoHiveShim.createRecordReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcSplitReader.Predicate> conjunctPredicates, Path path, long splitStart, long splitLength) |
Modifier and Type | Method and Description |
---|---|
protected org.apache.orc.Reader | OrcShimV200.createReader(org.apache.hadoop.fs.Path path, Configuration conf) |
protected org.apache.orc.Reader | OrcShimV230.createReader(org.apache.hadoop.fs.Path path, Configuration conf) |
org.apache.orc.RecordReader | OrcShimV200.createRecordReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcSplitReader.Predicate> conjunctPredicates, Path path, long splitStart, long splitLength) |
org.apache.orc.RecordReader | OrcShim.createRecordReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcSplitReader.Predicate> conjunctPredicates, Path path, long splitStart, long splitLength) Creates an ORC RecordReader from the given conf, schema, and related parameters. |
protected org.apache.orc.Reader.Options | OrcShimV200.readOrcConf(org.apache.orc.Reader.Options options, Configuration conf) |
protected org.apache.orc.Reader.Options | OrcShimV230.readOrcConf(org.apache.orc.Reader.Options options, Configuration conf) |
Constructor and Description |
---|
OrcBulkWriterFactory(Vectorizer<T> vectorizer, Configuration configuration) Creates a new OrcBulkWriterFactory using the provided Vectorizer and Hadoop Configuration. |
OrcBulkWriterFactory(Vectorizer<T> vectorizer, Properties writerProperties, Configuration configuration) Creates a new OrcBulkWriterFactory using the provided Vectorizer, Hadoop Configuration, and ORC writer properties. |
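A sketch with a hypothetical single-column Vectorizer; the schema string and field mapping are illustrative:

```java
import org.apache.flink.orc.vector.Vectorizer;
import org.apache.flink.orc.writer.OrcBulkWriterFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class OrcWriterExample {
    // Hypothetical vectorizer mapping a String onto a one-column ORC schema.
    static class StringVectorizer extends Vectorizer<String> {
        StringVectorizer() {
            super("struct<value:string>");
        }

        @Override
        public void vectorize(String element, VectorizedRowBatch batch) throws IOException {
            BytesColumnVector col = (BytesColumnVector) batch.cols[0];
            col.setVal(batch.size++, element.getBytes(StandardCharsets.UTF_8));
        }
    }

    public static OrcBulkWriterFactory<String> build() {
        return new OrcBulkWriterFactory<>(new StringVectorizer(), new Configuration());
    }
}
```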
Constructor and Description |
---|
HadoopModule(SecurityConfiguration securityConfiguration, Configuration hadoopConfiguration) |
Modifier and Type | Method and Description |
---|---|
static Configuration | HadoopUtils.getHadoopConfiguration(Configuration flinkConfiguration) |
Configuration | HadoopConfigLoader.getOrLoadHadoopConfig() Gets the loaded Hadoop config (or falls back to one loaded from the classpath). |
Modifier and Type | Method and Description |
---|---|
Configuration | SerializableConfiguration.getConfiguration() |
Modifier and Type | Method and Description |
---|---|
T | HadoopPathBasedBulkFormatBuilder.withConfiguration(Configuration configuration) |
Constructor and Description |
---|
HadoopPathBasedBulkFormatBuilder(org.apache.hadoop.fs.Path basePath, HadoopPathBasedBulkWriter.Factory<IN> writerFactory, Configuration configuration, BucketAssigner<IN,BucketID> assigner) |
HadoopPathBasedBulkFormatBuilder(org.apache.hadoop.fs.Path basePath, HadoopPathBasedBulkWriter.Factory<IN> writerFactory, HadoopFileCommitterFactory fileCommitterFactory, Configuration configuration, BucketAssigner<IN,BucketID> assigner, CheckpointRollingPolicy<IN,BucketID> policy, BucketFactory<IN,BucketID> bucketFactory, OutputFileConfig outputFileConfig) |
SerializableConfiguration(Configuration configuration) |
Modifier and Type | Method and Description |
---|---|
BucketingSink<T> | BucketingSink.setFSConfig(Configuration config) Deprecated. Specifies a custom Configuration that will be used when creating the FileSystem for writing. |
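For the deprecated BucketingSink, the hook looks like this (paths and properties are illustrative; new code should prefer a streaming file sink):

```java
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.hadoop.conf.Configuration;

public class BucketingSinkExample {
    public static BucketingSink<String> build() {
        Configuration fsConf = new Configuration();
        fsConf.set("fs.defaultFS", "hdfs://namenode:8020");

        BucketingSink<String> sink = new BucketingSink<>("/base/path");
        sink.setFSConfig(fsConf); // custom FS settings used when the sink opens files
        return sink;
    }
}
```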
Modifier and Type | Method and Description |
---|---|
BulkWriter.Factory<RowData> | HiveShimV200.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> | HiveShimV100.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> | HiveShim.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes) Creates an ORC BulkWriter.Factory for different Hive versions. |
void | HiveShimV210.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) |
void | HiveShimV100.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) |
void | HiveShimV310.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) |
void | HiveShim.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) Creates a table with PK and NOT NULL constraints. |
void | HiveMetastoreClientWrapper.createTableWithConstraints(org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) |
List<org.apache.hadoop.hive.metastore.api.FieldSchema> | HiveShimV110.getFieldsFromDeserializer(Configuration conf, org.apache.hadoop.hive.metastore.api.Table table, boolean skipConfError) |
List<org.apache.hadoop.hive.metastore.api.FieldSchema> | HiveShimV100.getFieldsFromDeserializer(Configuration conf, org.apache.hadoop.hive.metastore.api.Table table, boolean skipConfError) |
List<org.apache.hadoop.hive.metastore.api.FieldSchema> | HiveShim.getFieldsFromDeserializer(Configuration conf, org.apache.hadoop.hive.metastore.api.Table table, boolean skipConfError) Gets the Hive table schema from the deserializer. |
Set<String> | HiveMetastoreClientWrapper.getNotNullColumns(Configuration conf, String dbName, String tableName) |
Set<String> | HiveShimV100.getNotNullColumns(org.apache.hadoop.hive.metastore.IMetaStoreClient client, Configuration conf, String dbName, String tableName) |
Set<String> | HiveShimV310.getNotNullColumns(org.apache.hadoop.hive.metastore.IMetaStoreClient client, Configuration conf, String dbName, String tableName) |
Set<String> | HiveShim.getNotNullColumns(org.apache.hadoop.hive.metastore.IMetaStoreClient client, Configuration conf, String dbName, String tableName) Gets the set of columns that have NOT NULL constraints. |
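A sketch of going through the version-independent HiveShim interface; the metastore client is assumed to be created elsewhere, and the database/table names are hypothetical:

```java
import org.apache.flink.table.catalog.hive.client.HiveShim;
import org.apache.flink.table.catalog.hive.client.HiveShimLoader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.metastore.IMetaStoreClient;

import java.util.Set;

public class HiveShimExample {
    public static Set<String> notNullColumns(IMetaStoreClient client, Configuration conf) {
        // Pick the shim implementation matching the Hive version on the classpath.
        HiveShim shim = HiveShimLoader.loadHiveShim(HiveShimLoader.getHiveVersion());
        return shim.getNotNullColumns(client, conf, "default", "orders");
    }
}
```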
Modifier and Type | Method and Description |
---|---|
static void | Utils.setTokensFor(org.apache.hadoop.yarn.api.records.ContainerLaunchContext amContainer, List<org.apache.hadoop.fs.Path> paths, Configuration conf) |
static void | Utils.setupYarnClassPath(Configuration conf, Map<String,String> appMasterEnv) |
Modifier and Type | Method and Description |
---|---|
void | Configuration.addResource(Configuration conf) Adds a configuration resource. |
static void | Configuration.dumpConfiguration(Configuration config, String propertyName, Writer out) Writes properties and their attributes (final and resource) to the given Writer. |
static void | Configuration.dumpConfiguration(Configuration config, Writer out) Writes out all properties and their attributes (final and resource) to the given Writer. |
Constructor and Description |
---|
Configuration(Configuration other) A new configuration with the same settings cloned from another. |
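Finally, the clone constructor and dumpConfiguration compose naturally; a small self-contained example with an illustrative property:

```java
import org.apache.hadoop.conf.Configuration;

import java.io.IOException;
import java.io.StringWriter;

public class ConfCloneExample {
    public static void main(String[] args) throws IOException {
        Configuration base = new Configuration();
        base.set("fs.defaultFS", "hdfs://namenode:8020");

        // Clone constructor: an independent copy with the same settings.
        Configuration copy = new Configuration(base);

        // Dump all properties (with their final/resource attributes).
        StringWriter out = new StringWriter();
        Configuration.dumpConfiguration(copy, out);
        System.out.println(out);
    }
}
```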