Constructor and Description |
---|
TableInputFormat(Configuration hConf) Deprecated. |
Modifier and Type | Method and Description |
---|---|
static Configuration | HadoopUtils.getHadoopConfiguration(Configuration flinkConfiguration) Returns a new Hadoop Configuration object using the path to the hadoop conf configured in the main configuration (flink-conf.yaml). |
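A minimal sketch of deriving a Hadoop Configuration from the Flink configuration via the utility above. The HadoopUtils package (org.apache.flink.runtime.util here) is an assumption, since the listing does not show it; note that Flink and Hadoop each define a class named Configuration, so one side is fully qualified.

```java
import org.apache.flink.configuration.GlobalConfiguration;
import org.apache.flink.runtime.util.HadoopUtils; // assumed package, not shown in the listing

public class HadoopConfFromFlink {
    public static void main(String[] args) {
        // Flink's own Configuration, loaded from flink-conf.yaml in FLINK_CONF_DIR.
        org.apache.flink.configuration.Configuration flinkConf =
                GlobalConfiguration.loadConfiguration();

        // Hadoop's Configuration, built from the hadoop conf path configured in Flink.
        org.apache.hadoop.conf.Configuration hadoopConf =
                HadoopUtils.getHadoopConfiguration(flinkConf);

        System.out.println(hadoopConf.get("fs.defaultFS"));
    }
}
```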
Modifier and Type | Field and Description |
---|---|
protected Configuration | HadoopOutputFormatBase.configuration |
Modifier and Type | Method and Description |
---|---|
Configuration | HadoopOutputFormatBase.getConfiguration() |
Configuration | HadoopInputFormatBase.getConfiguration() |
Modifier and Type | Method and Description |
---|---|
static void | HadoopUtils.mergeHadoopConf(Configuration hadoopConfig) Merges the Hadoop configuration into the given Configuration. |
Constructor and Description |
---|
HBaseSinkFunction(String hTableName, Configuration conf, HBaseMutationConverter<T> mutationConverter, long bufferFlushMaxSizeInBytes, long bufferFlushMaxMutations, long bufferFlushIntervalMillis) |
Modifier and Type | Field and Description |
---|---|
protected Configuration | AbstractHBaseTableSource.conf |
protected Configuration | AbstractHBaseDynamicTableSource.conf |
Constructor and Description |
---|
AbstractHBaseDynamicTableSource(Configuration conf, String tableName, HBaseTableSchema hbaseSchema, String nullStringLiteral) |
AbstractHBaseTableSource(Configuration conf, String tableName, HBaseTableSchema hbaseSchema, int[] projectFields) |
HBaseLookupFunction(Configuration configuration, String hTableName, HBaseTableSchema hbaseTableSchema) |
HBaseRowDataLookupFunction(Configuration configuration, String hTableName, HBaseTableSchema hbaseTableSchema, String nullStringLiteral) |
Modifier and Type | Method and Description |
---|---|
static Configuration | HBaseConfigurationUtil.createHBaseConf() |
static Configuration | HBaseConfigurationUtil.deserializeConfiguration(byte[] serializedConfig, Configuration targetConfig) Deserialize a Hadoop Configuration from byte[]. |
static Configuration | HBaseConfigurationUtil.getHBaseConfiguration() |
Modifier and Type | Method and Description |
---|---|
static Configuration | HBaseConfigurationUtil.deserializeConfiguration(byte[] serializedConfig, Configuration targetConfig) Deserialize a Hadoop Configuration from byte[]. |
static byte[] | HBaseConfigurationUtil.serializeConfiguration(Configuration conf) Serialize a Hadoop Configuration into byte[]. |
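A round-trip sketch using the two utilities above, e.g. to ship an HBase configuration to the task managers; the HBaseConfigurationUtil package is assumed, as the listing does not show it.

```java
import org.apache.hadoop.conf.Configuration;

// Serialize the HBase configuration on the client side...
Configuration original = HBaseConfigurationUtil.getHBaseConfiguration();
byte[] bytes = HBaseConfigurationUtil.serializeConfiguration(original);

// ...and restore it elsewhere into a fresh target Configuration.
Configuration restored =
        HBaseConfigurationUtil.deserializeConfiguration(bytes, new Configuration());
```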
Modifier and Type | Method and Description |
---|---|
protected Configuration | AbstractTableInputFormat.getHadoopConfiguration() |
Constructor and Description |
---|
AbstractTableInputFormat(Configuration hConf) |
HBaseDynamicTableSource(Configuration conf, String tableName, HBaseTableSchema hbaseSchema, String nullStringLiteral) |
HBaseInputFormat(Configuration hConf) Constructs an InputFormat with an HBase configuration to read data from HBase. |
HBaseRowDataInputFormat(Configuration conf, String tableName, HBaseTableSchema schema, String nullStringLiteral) |
HBaseRowInputFormat(Configuration conf, String tableName, HBaseTableSchema schema) |
HBaseTableSource(Configuration conf, String tableName) Takes the HBase configuration and the name of the table to read. |
HBaseTableSource(Configuration conf, String tableName, HBaseTableSchema hbaseSchema, int[] projectFields) |
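A minimal sketch of the two-argument HBaseTableSource constructor from the listing; the table name is made up, and createHBaseConf() comes from HBaseConfigurationUtil above.

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: a table source over an existing HBase table named "orders" (illustrative).
Configuration conf = HBaseConfigurationUtil.createHBaseConf();
HBaseTableSource source = new HBaseTableSource(conf, "orders");
```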
Constructor and Description |
---|
HBaseDynamicTableSink(String hbaseTableName, HBaseTableSchema hbaseTableSchema, Configuration hbaseConf, HBaseWriteOptions writeOptions, String nullStringLiteral) |
HBaseUpsertTableSink(String tableName, HBaseTableSchema hbaseTableSchema, Configuration hconf, HBaseWriteOptions writeOptions) |
Modifier and Type | Method and Description |
---|---|
static org.apache.hadoop.hive.conf.HiveConf | HiveConfUtils.create(Configuration conf) Creates a HiveConf instance from a Hadoop configuration. |
Modifier and Type | Method and Description |
---|---|
CompressWriterFactory<IN> | CompressWriterFactory.withHadoopCompression(String codecName, Configuration hadoopConfig) Compresses the data using the provided Hadoop CompressionCodec and Configuration. |
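A hedged sketch of enabling Hadoop compression on a writer factory. CompressWriters.forExtractor and DefaultExtractor are assumed companion APIs from the same Flink module and do not appear in the listing above; the codec name is assumed to be resolvable by Hadoop's codec factory.

```java
import org.apache.hadoop.conf.Configuration;

public class CompressedWriterSketch {
    // Sketch: a String-writing factory whose output is compressed by a named Hadoop codec.
    static CompressWriterFactory<String> gzipFactory() throws Exception {
        return CompressWriters.forExtractor(new DefaultExtractor<String>()) // assumed helpers
                .withHadoopCompression("Gzip", new Configuration()); // codec name assumed resolvable
    }
}
```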
Modifier and Type | Method and Description |
---|---|
HadoopFileCommitter | DefaultHadoopFileCommitterFactory.create(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath) |
HadoopFileCommitter | HadoopFileCommitterFactory.create(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath) Creates a new Hadoop file committer for writing. |
HadoopFileCommitter | DefaultHadoopFileCommitterFactory.recoverForCommit(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath, org.apache.hadoop.fs.Path tempFilePath) |
HadoopFileCommitter | HadoopFileCommitterFactory.recoverForCommit(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath, org.apache.hadoop.fs.Path inProgressPath) Creates a Hadoop file committer for committing the pending file. |
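A sketch of the two factory paths above; the no-argument construction of DefaultHadoopFileCommitterFactory is an assumption, and the paths are illustrative.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class CommitterSketch {
    static void sketch() throws IOException {
        Configuration conf = new Configuration();
        Path target = new Path("hdfs:///output/part-0");

        HadoopFileCommitterFactory factory = new DefaultHadoopFileCommitterFactory(); // assumed no-arg

        // Fresh write: a committer that tracks an in-progress file for the target path.
        HadoopFileCommitter committer = factory.create(conf, target);

        // Recovery: resume committing a pending file from its in-progress path after a failure.
        HadoopFileCommitter recovered = factory.recoverForCommit(
                conf, target, new Path("hdfs:///output/.part-0.inprogress"));
    }
}
```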
Constructor and Description |
---|
HadoopPathBasedBucketWriter(Configuration configuration, HadoopPathBasedBulkWriter.Factory<IN> bulkWriterFactory, HadoopFileCommitterFactory fileCommitterFactory) |
Constructor and Description |
---|
HadoopRenameFileCommitter(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath) |
HadoopRenameFileCommitter(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath, org.apache.hadoop.fs.Path inProgressPath) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> | ParquetColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig, RowType producedRowType, List<String> partitionKeys, PartitionFieldExtractor<SplitT> extractor, int batchSize, boolean isUtcTimestamp, boolean isCaseSensitive) Creates a partitioned ParquetColumnarRowInputFormat whose partition columns can be generated from the Path. |
ParquetTableSource.Builder | ParquetTableSource.Builder.withConfiguration(Configuration config) Sets a Hadoop Configuration for the Parquet reader. |
Constructor and Description |
---|
ParquetColumnarRowInputFormat(Configuration hadoopConfig, RowType projectedType, int batchSize, boolean isUtcTimestamp, boolean isCaseSensitive) Constructor to create a Parquet format without extra fields. |
ParquetColumnarRowInputFormat(Configuration hadoopConfig, RowType projectedType, RowType producedType, ColumnBatchFactory<SplitT> batchFactory, int batchSize, boolean isUtcTimestamp, boolean isCaseSensitive) Constructor to create a Parquet format with extra fields created by the ColumnBatchFactory. |
Modifier and Type | Method and Description |
---|---|
static ParquetWriterFactory<RowData> | ParquetRowDataBuilder.createWriterFactory(RowType rowType, Configuration conf, boolean utcTimestamp) Creates a Parquet BulkWriter.Factory. |
protected org.apache.parquet.hadoop.api.WriteSupport<RowData> | ParquetRowDataBuilder.getWriteSupport(Configuration conf) |
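A minimal sketch of createWriterFactory for rows of (INT, STRING); the logical-type classes are the standard ones from org.apache.flink.table.types.logical, the field shape is illustrative, and the ParquetRowDataBuilder/ParquetWriterFactory imports are omitted because their packages are not shown in the listing.

```java
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

// Sketch: rows of (INT, STRING), written with UTC timestamp semantics.
RowType rowType = RowType.of(new IntType(), new VarCharType(VarCharType.MAX_LENGTH));

ParquetWriterFactory<RowData> writerFactory =
        ParquetRowDataBuilder.createWriterFactory(rowType, new Configuration(), true);
```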
Constructor and Description |
---|
FlinkParquetBuilder(RowType rowType, Configuration conf, boolean utcTimestamp) |
Modifier and Type | Method and Description |
---|---|
Configuration | SerializableConfiguration.conf() |
Modifier and Type | Method and Description |
---|---|
void | ParquetRecordReader.initialize(org.apache.parquet.hadoop.ParquetFileReader reader, Configuration configuration) |
org.apache.parquet.io.api.RecordMaterializer<Row> | RowReadSupport.prepareForRead(Configuration configuration, Map<String,String> keyValueMetaData, org.apache.parquet.schema.MessageType fileSchema, org.apache.parquet.hadoop.api.ReadSupport.ReadContext readContext) |
Constructor and Description |
---|
SerializableConfiguration(Configuration conf) |
Modifier and Type | Method and Description |
---|---|
static ParquetColumnarRowSplitReader | ParquetSplitReaderUtil.genPartColumnarRowReader(boolean utcTimestamp, boolean caseSensitive, Configuration conf, String[] fullFieldNames, DataType[] fullFieldTypes, Map<String,Object> partitionSpec, int[] selectedFields, int batchSize, Path path, long splitStart, long splitLength) Utility for generating a partitioned ParquetColumnarRowSplitReader. |
Constructor and Description |
---|
ParquetColumnarRowSplitReader(boolean utcTimestamp, boolean caseSensitive, Configuration conf, LogicalType[] selectedTypes, String[] selectedFieldNames, ParquetColumnarRowSplitReader.ColumnBatchGenerator generator, int batchSize, org.apache.hadoop.fs.Path path, long splitStart, long splitLength) |
Constructor and Description |
---|
SequenceFileWriterFactory(Configuration hadoopConf, Class<K> keyClass, Class<V> valueClass) Creates a new SequenceFileWriterFactory using the given builder to assemble the SequenceFileWriter. |
SequenceFileWriterFactory(Configuration hadoopConf, Class<K> keyClass, Class<V> valueClass, String compressionCodecName) Creates a new SequenceFileWriterFactory using the given builder to assemble the SequenceFileWriter. |
SequenceFileWriterFactory(Configuration hadoopConf, Class<K> keyClass, Class<V> valueClass, String compressionCodecName, org.apache.hadoop.io.SequenceFile.CompressionType compressionType) Creates a new SequenceFileWriterFactory using the given builder to assemble the SequenceFileWriter. |
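A sketch of the codec-name variant above; the codec string is assumed to be resolvable by Hadoop's codec factory, and the key/value classes are the usual Writable types.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

// Sketch: a factory producing SequenceFiles of (LongWritable, Text) records,
// compressed with a codec looked up by name ("BZip2" is an assumed-resolvable name).
SequenceFileWriterFactory<LongWritable, Text> factory =
        new SequenceFileWriterFactory<>(
                new Configuration(), LongWritable.class, Text.class, "BZip2");
```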
Modifier and Type | Method and Description |
---|---|
String | EnvironmentVariableKeyProvider.getStorageAccountKey(String s, Configuration configuration) |
Modifier and Type | Method and Description |
---|---|
protected abstract URI | AbstractS3FileSystemFactory.getInitURI(URI fsUri, Configuration hadoopConfig) |
Modifier and Type | Method and Description |
---|---|
protected URI | S3FileSystemFactory.getInitURI(URI fsUri, Configuration hadoopConfig) |
Constructor and Description |
---|
HadoopS3AccessHelper(org.apache.hadoop.fs.s3a.S3AFileSystem s3a, Configuration conf) |
Modifier and Type | Method and Description |
---|---|
Configuration | HCatInputFormatBase.getConfiguration() Returns the Configuration of the HCatInputFormat. |
Constructor and Description |
---|
HCatInputFormatBase(String database, String table, Configuration config) Creates an HCatInputFormat for the given database, table, and Configuration. |
Constructor and Description |
---|
HCatInputFormat(String database, String table, Configuration config) |
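A one-line sketch of the constructor above; the database and table names are made up, and the declared IOException is an assumption about metastore access.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

public class HCatSketch {
    // Sketch: read the table "clicks" from database "web" through HCatalog.
    static HCatInputFormat<?> clicksInput() throws IOException {
        return new HCatInputFormat<>("web", "clicks", new Configuration());
    }
}
```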
Modifier and Type | Field and Description |
---|---|
protected Configuration | OrcInputFormat.conf |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> | OrcColumnarRowFileInputFormat.createPartitionedFormat(OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> shim, Configuration hadoopConfig, RowType tableType, List<String> partitionKeys, PartitionFieldExtractor<SplitT> extractor, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize) Creates a partitioned OrcColumnarRowFileInputFormat whose partition columns can be generated from the split. |
static OrcColumnarRowSplitReader<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> | OrcSplitReaderUtil.genPartColumnarRowReader(String hiveVersion, Configuration conf, String[] fullFieldNames, DataType[] fullFieldTypes, Map<String,Object> partitionSpec, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) Utility for generating a partitioned OrcColumnarRowSplitReader. |
OrcTableSource.Builder | OrcTableSource.Builder.withConfiguration(Configuration config) Sets a Hadoop Configuration for the ORC reader. |
Constructor and Description |
---|
AbstractOrcFileInputFormat(OrcShim<BatchT> shim, Configuration hadoopConfig, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize) |
OrcColumnarRowFileInputFormat(OrcShim<BatchT> shim, Configuration hadoopConfig, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, ColumnBatchFactory<BatchT,SplitT> batchFactory, RowType projectedOutputType) |
OrcColumnarRowSplitReader(OrcShim<BATCH> shim, Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, OrcColumnarRowSplitReader.ColumnBatchGenerator<BATCH> batchGenerator, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) |
OrcInputFormat(Path path, org.apache.orc.TypeDescription orcSchema, Configuration orcConfig, int batchSize) Creates an OrcInputFormat. |
OrcRowInputFormat(String path, String schemaString, Configuration orcConfig) Creates an OrcRowInputFormat. |
OrcRowInputFormat(String path, String schemaString, Configuration orcConfig, int batchSize) Creates an OrcRowInputFormat. |
OrcRowInputFormat(String path, org.apache.orc.TypeDescription orcSchema, Configuration orcConfig, int batchSize) Creates an OrcRowInputFormat. |
OrcRowSplitReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) |
OrcSplitReader(OrcShim<BATCH> shim, Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) |
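A minimal sketch of the string-schema OrcRowInputFormat constructor from the listing; the path and schema are illustrative, with the schema written in ORC's struct syntax.

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: read ORC files of (id BIGINT, name STRING) as Rows.
OrcRowInputFormat input = new OrcRowInputFormat(
        "hdfs:///data/events",               // illustrative path
        "struct<id:bigint,name:string>",     // ORC schema string
        new Configuration());
```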
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> | OrcNoHiveColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig, RowType tableType, List<String> partitionKeys, PartitionFieldExtractor<SplitT> extractor, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize) Creates a partitioned OrcColumnarRowFileInputFormat whose partition columns can be generated from the split. |
static OrcColumnarRowSplitReader<org.apache.orc.storage.ql.exec.vector.VectorizedRowBatch> | OrcNoHiveSplitReaderUtil.genPartColumnarRowReader(Configuration conf, String[] fullFieldNames, DataType[] fullFieldTypes, Map<String,Object> partitionSpec, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength) Utility for generating a partitioned OrcColumnarRowSplitReader. |
Constructor and Description |
---|
OrcNoHiveBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes) |
Modifier and Type | Method and Description |
---|---|
org.apache.orc.RecordReader | OrcNoHiveShim.createRecordReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, Path path, long splitStart, long splitLength) |
Modifier and Type | Method and Description |
---|---|
protected org.apache.orc.Reader | OrcShimV200.createReader(org.apache.hadoop.fs.Path path, Configuration conf) |
protected org.apache.orc.Reader | OrcShimV230.createReader(org.apache.hadoop.fs.Path path, Configuration conf) |
org.apache.orc.RecordReader | OrcShimV200.createRecordReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, Path path, long splitStart, long splitLength) |
org.apache.orc.RecordReader | OrcShim.createRecordReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, Path path, long splitStart, long splitLength) Creates an ORC RecordReader from the configuration, schema, and related parameters. |
protected org.apache.orc.Reader.Options | OrcShimV200.readOrcConf(org.apache.orc.Reader.Options options, Configuration conf) |
protected org.apache.orc.Reader.Options | OrcShimV230.readOrcConf(org.apache.orc.Reader.Options options, Configuration conf) |
Modifier and Type | Method and Description |
---|---|
Configuration | SerializableHadoopConfigWrapper.getHadoopConfig() |
Constructor and Description |
---|
SerializableHadoopConfigWrapper(Configuration hadoopConfig) |
Modifier and Type | Class and Description |
---|---|
class | ThreadLocalClassLoaderConfiguration Workaround for https://issues.apache.org/jira/browse/ORC-653. |
Constructor and Description |
---|
OrcBulkWriterFactory(Vectorizer<T> vectorizer, Configuration configuration) Creates a new OrcBulkWriterFactory using the provided Vectorizer and Hadoop Configuration. |
OrcBulkWriterFactory(Vectorizer<T> vectorizer, Properties writerProperties, Configuration configuration) Creates a new OrcBulkWriterFactory using the provided Vectorizer, ORC writer properties, and Hadoop Configuration. |
ThreadLocalClassLoaderConfiguration(Configuration other) |
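A hedged sketch of the two-argument OrcBulkWriterFactory constructor. The Vectorizer subclass follows the pattern from Flink's ORC documentation (a super(schema) constructor plus a vectorize(element, batch) callback); the record type and schema are illustrative.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

// Illustrative vectorizer for single-string records.
class StringVectorizer extends Vectorizer<String> {
    StringVectorizer() {
        super("struct<value:string>"); // ORC schema: one string column
    }

    @Override
    public void vectorize(String element, VectorizedRowBatch batch) throws IOException {
        BytesColumnVector col = (BytesColumnVector) batch.cols[0];
        int row = batch.size++; // claim the next row slot in the batch
        col.setVal(row, element.getBytes(StandardCharsets.UTF_8));
    }
}

// Sketch: pair the vectorizer with a Hadoop Configuration.
OrcBulkWriterFactory<String> writerFactory =
        new OrcBulkWriterFactory<>(new StringVectorizer(), new Configuration());
```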
Constructor and Description |
---|
HadoopModule(SecurityConfiguration securityConfiguration, Configuration hadoopConfiguration) |
Modifier and Type | Method and Description |
---|---|
static Configuration | HadoopUtils.getHadoopConfiguration(Configuration flinkConfiguration) |
Configuration | HadoopConfigLoader.getOrLoadHadoopConfig() Gets the loaded Hadoop config (or falls back to one loaded from the classpath). |
Modifier and Type | Method and Description |
---|---|
Configuration | SerializableConfiguration.getConfiguration() |
Modifier and Type | Method and Description |
---|---|
T | HadoopPathBasedBulkFormatBuilder.withConfiguration(Configuration configuration) |
Constructor and Description |
---|
HadoopPathBasedBulkFormatBuilder(org.apache.hadoop.fs.Path basePath, HadoopPathBasedBulkWriter.Factory<IN> writerFactory, Configuration configuration, BucketAssigner<IN,BucketID> assigner) |
HadoopPathBasedBulkFormatBuilder(org.apache.hadoop.fs.Path basePath, HadoopPathBasedBulkWriter.Factory<IN> writerFactory, HadoopFileCommitterFactory fileCommitterFactory, Configuration configuration, BucketAssigner<IN,BucketID> assigner, CheckpointRollingPolicy<IN,BucketID> policy, BucketFactory<IN,BucketID> bucketFactory, OutputFileConfig outputFileConfig) |
SerializableConfiguration(Configuration configuration) |
Modifier and Type | Method and Description |
---|---|
BulkWriter.Factory<RowData> | HiveShimV200.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> | HiveShimV100.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> | HiveShim.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes) Creates an ORC BulkWriter.Factory for different Hive versions. |
void | HiveShimV210.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) |
void | HiveShimV100.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) |
void | HiveShimV310.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) |
void | HiveShim.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) Creates a table with PK and NOT NULL constraints. |
void | HiveMetastoreClientWrapper.createTableWithConstraints(org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits) |
List<org.apache.hadoop.hive.metastore.api.FieldSchema> | HiveShimV110.getFieldsFromDeserializer(Configuration conf, org.apache.hadoop.hive.metastore.api.Table table, boolean skipConfError) |
List<org.apache.hadoop.hive.metastore.api.FieldSchema> | HiveShimV100.getFieldsFromDeserializer(Configuration conf, org.apache.hadoop.hive.metastore.api.Table table, boolean skipConfError) |
List<org.apache.hadoop.hive.metastore.api.FieldSchema> | HiveShim.getFieldsFromDeserializer(Configuration conf, org.apache.hadoop.hive.metastore.api.Table table, boolean skipConfError) Gets the Hive table schema from the deserializer. |
Set<String> | HiveMetastoreClientWrapper.getNotNullColumns(Configuration conf, String dbName, String tableName) |
Set<String> | HiveShimV100.getNotNullColumns(org.apache.hadoop.hive.metastore.IMetaStoreClient client, Configuration conf, String dbName, String tableName) |
Set<String> | HiveShimV310.getNotNullColumns(org.apache.hadoop.hive.metastore.IMetaStoreClient client, Configuration conf, String dbName, String tableName) |
Set<String> | HiveShim.getNotNullColumns(org.apache.hadoop.hive.metastore.IMetaStoreClient client, Configuration conf, String dbName, String tableName) Gets the set of columns that have NOT NULL constraints. |
Modifier and Type | Method and Description |
---|---|
static Configuration | HiveTableUtil.getHadoopConfiguration(String hadoopConfDir) Returns a new Hadoop Configuration object using the configured path to the Hadoop conf directory. |
Modifier and Type | Method and Description |
---|---|
static void | Utils.setTokensFor(org.apache.hadoop.yarn.api.records.ContainerLaunchContext amContainer, List<org.apache.hadoop.fs.Path> paths, Configuration conf) |
static void | Utils.setupYarnClassPath(Configuration conf, Map<String,String> appMasterEnv) |
Modifier and Type | Method and Description |
---|---|
void | Configuration.addResource(Configuration conf) Adds a configuration resource. |
static void | Configuration.dumpConfiguration(Configuration config, String propertyName, Writer out) Writes properties and their attributes (final and resource) to the given Writer. |
static void | Configuration.dumpConfiguration(Configuration config, Writer out) Writes out all properties and their attributes (final and resource) to the given Writer. |
Constructor and Description |
---|
Configuration(Configuration other) A new configuration with the same settings cloned from another. |
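A short sketch of the Configuration members listed above: cloning via the copy constructor, merging another configuration in as a resource, and dumping all properties.

```java
import java.io.IOException;
import java.io.StringWriter;
import org.apache.hadoop.conf.Configuration;

public class ConfigurationExample {
    public static void main(String[] args) throws IOException {
        Configuration base = new Configuration();
        base.set("fs.defaultFS", "hdfs://namenode:8020");

        // Clone: a new configuration with the same settings as 'base'.
        Configuration copy = new Configuration(base);

        // Merge another configuration in as a resource.
        Configuration overrides = new Configuration(false); // no default resources
        overrides.set("dfs.replication", "2");
        copy.addResource(overrides);

        // Dump all properties and their attributes as text.
        StringWriter out = new StringWriter();
        Configuration.dumpConfiguration(copy, out);
        System.out.println(out);
    }
}
```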