Modifier and Type | Method and Description
---|---
`static Configuration` | `HadoopUtils.getHadoopConfiguration(Configuration flinkConfiguration)` Returns a new Hadoop `Configuration` object using the path to the Hadoop conf configured in the main Flink configuration (`flink-conf.yaml`).
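
Where the Flink and Hadoop `Configuration` types meet, the usual first step is deriving the latter from the former. A minimal sketch of that call, assuming the flink-hadoop-compatibility variant of `HadoopUtils` (several `HadoopUtils` classes exist; the import path is an assumption):

```java
import org.apache.flink.api.java.hadoop.mapred.utils.HadoopUtils; // package assumed; other HadoopUtils variants exist
import org.apache.flink.configuration.Configuration;

public class HadoopConfFromFlink {
    public static void main(String[] args) {
        // Flink configuration, normally populated from flink-conf.yaml.
        Configuration flinkConf = new Configuration();

        // Derive a Hadoop Configuration; resources such as core-site.xml are
        // picked up from the conf dir referenced by the Flink configuration.
        org.apache.hadoop.conf.Configuration hadoopConf =
                HadoopUtils.getHadoopConfiguration(flinkConf);

        System.out.println(hadoopConf.get("fs.defaultFS"));
    }
}
```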
Modifier and Type | Field and Description
---|---
`protected Configuration` | `HadoopOutputFormatBase.configuration`
Modifier and Type | Method and Description
---|---
`Configuration` | `HadoopInputFormatBase.getConfiguration()`
`Configuration` | `HadoopOutputFormatBase.getConfiguration()`
Modifier and Type | Method and Description
---|---
`static void` | `HadoopUtils.mergeHadoopConf(Configuration hadoopConfig)` Merges a Hadoop configuration into the given `Configuration`.
Constructor and Description
---
`HBaseSinkFunction(String hTableName, Configuration conf, HBaseMutationConverter<T> mutationConverter, long bufferFlushMaxSizeInBytes, long bufferFlushMaxMutations, long bufferFlushIntervalMillis)`
Modifier and Type | Field and Description
---|---
`protected Configuration` | `AbstractHBaseDynamicTableSource.conf`
Constructor and Description
---
`AbstractHBaseDynamicTableSource(Configuration conf, String tableName, HBaseTableSchema hbaseSchema, String nullStringLiteral, int maxRetryTimes, LookupCache cache)`
`HBaseRowDataLookupFunction(Configuration configuration, String hTableName, HBaseTableSchema hbaseTableSchema, String nullStringLiteral, int maxRetryTimes)`
Modifier and Type | Method and Description
---|---
`static Configuration` | `HBaseConnectorOptionsUtil.getHBaseConfiguration(ReadableConfig tableOptions)` Builds an HBase `Configuration` from the given table options.
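
A short usage sketch, assuming the option keys match the HBase connector's table DDL and that Flink's `Configuration` is used as the `ReadableConfig` (both the keys and the import path are assumptions):

```java
import org.apache.flink.configuration.Configuration; // implements ReadableConfig
import org.apache.flink.connector.hbase.util.HBaseConnectorOptionsUtil; // package assumed

public class HBaseConfFromOptions {
    public static void main(String[] args) {
        // Table options as they would appear in the connector's WITH (...) clause.
        Configuration tableOptions = new Configuration();
        tableOptions.setString("zookeeper.quorum", "zk1:2181,zk2:2181"); // key assumed
        tableOptions.setString("zookeeper.znode.parent", "/hbase");      // key assumed

        // Turn the table options into a Hadoop/HBase client Configuration.
        org.apache.hadoop.conf.Configuration hbaseConf =
                HBaseConnectorOptionsUtil.getHBaseConfiguration(tableOptions);
    }
}
```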
Modifier and Type | Method and Description
---|---
`static Configuration` | `HBaseConfigurationUtil.createHBaseConf()`
`static Configuration` | `HBaseConfigurationUtil.deserializeConfiguration(byte[] serializedConfig, Configuration targetConfig)` Deserializes a Hadoop `Configuration` from a `byte[]`.
`static Configuration` | `HBaseConfigurationUtil.getHBaseConfiguration()`
Modifier and Type | Method and Description
---|---
`static Configuration` | `HBaseConfigurationUtil.deserializeConfiguration(byte[] serializedConfig, Configuration targetConfig)` Deserializes a Hadoop `Configuration` from a `byte[]`.
`static byte[]` | `HBaseConfigurationUtil.serializeConfiguration(Configuration conf)` Serializes a Hadoop `Configuration` into a `byte[]`.
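
The serialize/deserialize pair exists because a Hadoop `Configuration` is not Java-serializable, so connectors ship it as raw bytes. A minimal round-trip sketch using the signatures above (the package in the import is an assumption):

```java
import org.apache.flink.connector.hbase.util.HBaseConfigurationUtil; // package assumed
import org.apache.hadoop.conf.Configuration;

public class HBaseConfRoundTrip {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3");

        // Serialize on the client so the bytes can travel with the job graph.
        byte[] bytes = HBaseConfigurationUtil.serializeConfiguration(conf);

        // Deserialize on the task side into a fresh target Configuration.
        Configuration restored =
                HBaseConfigurationUtil.deserializeConfiguration(bytes, new Configuration());

        System.out.println(restored.get("hbase.zookeeper.quorum")); // zk1,zk2,zk3
    }
}
```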
Modifier and Type | Method and Description
---|---
`Configuration` | `HBaseDynamicTableSink.getConfiguration()`

Constructor and Description
---
`HBaseDynamicTableSink(String tableName, HBaseTableSchema hbaseTableSchema, Configuration hbaseConf, HBaseWriteOptions writeOptions, String nullStringLiteral)`
Modifier and Type | Method and Description
---|---
`protected Configuration` | `AbstractTableInputFormat.getHadoopConfiguration()`

Constructor and Description
---
`AbstractTableInputFormat(Configuration hConf)`
`HBaseDynamicTableSource(Configuration conf, String tableName, HBaseTableSchema hbaseSchema, String nullStringLiteral, int maxRetryTimes, LookupCache cache)`
`HBaseRowDataInputFormat(Configuration conf, String tableName, HBaseTableSchema schema, String nullStringLiteral)`
Modifier and Type | Method and Description
---|---
`Configuration` | `HBaseDynamicTableSink.getConfiguration()`

Constructor and Description
---
`HBaseDynamicTableSink(String tableName, HBaseTableSchema hbaseTableSchema, Configuration hbaseConf, HBaseWriteOptions writeOptions, String nullStringLiteral)`
Modifier and Type | Method and Description
---|---
`protected Configuration` | `AbstractTableInputFormat.getHadoopConfiguration()`

Constructor and Description
---
`AbstractTableInputFormat(Configuration hConf)`
`HBaseDynamicTableSource(Configuration conf, String tableName, HBaseTableSchema hbaseSchema, String nullStringLiteral, int maxRetryTimes, boolean lookupAsync, LookupCache cache)`
`HBaseRowDataAsyncLookupFunction(Configuration configuration, String hTableName, HBaseTableSchema hbaseTableSchema, String nullStringLiteral, int maxRetryTimes)`
`HBaseRowDataInputFormat(Configuration conf, String tableName, HBaseTableSchema schema, String nullStringLiteral)`
Modifier and Type | Method and Description
---|---
`static HiveConf` | `HiveConfUtils.create(Configuration conf)` Creates a `HiveConf` instance from a Hadoop configuration.
`static org.apache.hadoop.mapred.JobConf` | `JobConfUtils.createJobConfWithCredentials(Configuration configuration)`
Modifier and Type | Method and Description
---|---
`CompressWriterFactory<IN>` | `CompressWriterFactory.withHadoopCompression(String codecName, Configuration hadoopConfig)` Compresses the data using the provided Hadoop `CompressionCodec` and `Configuration`.
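
A short sketch of wiring up Hadoop compression. The `CompressWriters` builder and `DefaultExtractor` names are assumptions (they follow the flink-compress module), as is `"Gzip"` resolving through Hadoop's codec factory:

```java
import org.apache.flink.formats.compress.CompressWriterFactory;
import org.apache.flink.formats.compress.CompressWriters;            // builder entry point, assumed
import org.apache.flink.formats.compress.extractor.DefaultExtractor; // extractor name assumed
import org.apache.hadoop.conf.Configuration;

public class CompressedWriterSetup {
    public static void main(String[] args) {
        // The Hadoop Configuration is what the codec factory uses to
        // instantiate and configure the CompressionCodec.
        Configuration hadoopConf = new Configuration();

        CompressWriterFactory<String> factory =
                CompressWriters.forExtractor(new DefaultExtractor<String>())
                        .withHadoopCompression("Gzip", hadoopConf);
    }
}
```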
Modifier and Type | Method and Description
---|---
`HadoopFileCommitter` | `DefaultHadoopFileCommitterFactory.create(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath)`
`HadoopFileCommitter` | `HadoopFileCommitterFactory.create(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath)` Creates a new Hadoop file committer for writing.
`HadoopFileCommitter` | `DefaultHadoopFileCommitterFactory.recoverForCommit(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath, org.apache.hadoop.fs.Path tempFilePath)`
`HadoopFileCommitter` | `HadoopFileCommitterFactory.recoverForCommit(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath, org.apache.hadoop.fs.Path inProgressPath)` Creates a Hadoop file committer for committing a pending file.
Constructor and Description
---
`HadoopPathBasedBucketWriter(Configuration configuration, HadoopPathBasedBulkWriter.Factory<IN> bulkWriterFactory, HadoopFileCommitterFactory fileCommitterFactory)`

Constructor and Description
---
`HadoopRenameFileCommitter(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath)`
`HadoopRenameFileCommitter(Configuration configuration, org.apache.hadoop.fs.Path targetFilePath, org.apache.hadoop.fs.Path inProgressPath)`
Modifier and Type | Method and Description
---|---
`static <SplitT extends FileSourceSplit>` | `ParquetColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig, RowType producedRowType, TypeInformation<RowData> producedTypeInfo, List<String> partitionKeys, PartitionFieldExtractor<SplitT> extractor, int batchSize, boolean isUtcTimestamp, boolean isCaseSensitive)` Creates a partitioned `ParquetColumnarRowInputFormat`; the partition columns can be generated from the file `Path`.

Constructor and Description
---
`ParquetColumnarRowInputFormat(Configuration hadoopConfig, RowType projectedType, TypeInformation<RowData> producedTypeInfo, int batchSize, boolean isUtcTimestamp, boolean isCaseSensitive)` Constructor to create a Parquet format without extra fields.
Modifier and Type | Method and Description
---|---
`protected org.apache.parquet.hadoop.api.WriteSupport<T>` | `ParquetProtoWriters.ParquetProtoWriterBuilder.getWriteSupport(Configuration conf)`
Modifier and Type | Method and Description
---|---
`static ParquetWriterFactory<RowData>` | `ParquetRowDataBuilder.createWriterFactory(RowType rowType, Configuration conf, boolean utcTimestamp)` Creates a Parquet `BulkWriter.Factory`.
`protected org.apache.parquet.hadoop.api.WriteSupport<RowData>` | `ParquetRowDataBuilder.getWriteSupport(Configuration conf)`

Constructor and Description
---
`FlinkParquetBuilder(RowType rowType, Configuration conf, boolean utcTimestamp)`
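
A sketch of building a `RowData` Parquet writer factory from the signature above and handing it to a bulk-format file sink. The `FileSink` wiring, the import path, and the output path are illustrative assumptions:

```java
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder; // package assumed
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

public class ParquetSinkSetup {
    public static void main(String[] args) {
        // Logical row layout the writer will produce: (INT, STRING).
        RowType rowType = RowType.of(new IntType(), new VarCharType(VarCharType.MAX_LENGTH));

        // The Hadoop Configuration carries Parquet writer settings
        // (compression codec, page/block sizes, ...).
        Configuration hadoopConf = new Configuration();

        ParquetWriterFactory<RowData> factory =
                ParquetRowDataBuilder.createWriterFactory(rowType, hadoopConf, true);

        FileSink<RowData> sink = FileSink.forBulkFormat(new Path("/tmp/out"), factory).build();
    }
}
```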
Modifier and Type | Method and Description
---|---
`Configuration` | `SerializableConfiguration.conf()`
Modifier and Type | Method and Description
---|---
`static TableStats` | `ParquetFormatStatisticsReportUtil.getTableStatistics(List<Path> files, DataType producedDataType, Configuration hadoopConfig, boolean isUtcTimestamp)`

Constructor and Description
---
`SerializableConfiguration(Configuration conf)`
Modifier and Type | Method and Description
---|---
`static ParquetColumnarRowSplitReader` | `ParquetSplitReaderUtil.genPartColumnarRowReader(boolean utcTimestamp, boolean caseSensitive, Configuration conf, String[] fullFieldNames, DataType[] fullFieldTypes, Map<String,Object> partitionSpec, int[] selectedFields, int batchSize, Path path, long splitStart, long splitLength)` Utility for generating a partitioned `ParquetColumnarRowSplitReader`.
Constructor and Description
---
`ParquetColumnarRowSplitReader(boolean utcTimestamp, boolean caseSensitive, Configuration conf, LogicalType[] selectedTypes, String[] selectedFieldNames, ParquetColumnarRowSplitReader.ColumnBatchGenerator generator, int batchSize, org.apache.hadoop.fs.Path path, long splitStart, long splitLength)`
Constructor and Description
---
`SequenceFileWriterFactory(Configuration hadoopConf, Class<K> keyClass, Class<V> valueClass)` Creates a new SequenceFileWriterFactory using the given builder to assemble the SequenceFileWriter.
`SequenceFileWriterFactory(Configuration hadoopConf, Class<K> keyClass, Class<V> valueClass, String compressionCodecName)` Creates a new SequenceFileWriterFactory using the given builder to assemble the SequenceFileWriter.
`SequenceFileWriterFactory(Configuration hadoopConf, Class<K> keyClass, Class<V> valueClass, String compressionCodecName, org.apache.hadoop.io.SequenceFile.CompressionType compressionType)` Creates a new SequenceFileWriterFactory using the given builder to assemble the SequenceFileWriter.
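
The factory writes `Tuple2<K, V>` records as Hadoop `SequenceFile` key/value pairs. A sketch using the codec-name constructor above; the `FileSink` wiring, the import path, and the `"BZip2"` codec name are assumptions:

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.sequencefile.SequenceFileWriterFactory; // package assumed
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class SequenceFileSinkSetup {
    public static void main(String[] args) {
        Configuration hadoopConf = new Configuration();

        // Writes Tuple2<LongWritable, Text> records as SequenceFile entries,
        // compressed with the codec registered under the name "BZip2".
        SequenceFileWriterFactory<LongWritable, Text> factory =
                new SequenceFileWriterFactory<>(hadoopConf, LongWritable.class, Text.class, "BZip2");

        FileSink<Tuple2<LongWritable, Text>> sink =
                FileSink.forBulkFormat(new Path("/tmp/seq"), factory).build();
    }
}
```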
Modifier and Type | Method and Description
---|---
`String` | `EnvironmentVariableKeyProvider.getStorageAccountKey(String s, Configuration configuration)`
Modifier and Type | Method and Description
---|---
`static Configuration` | `ConfigUtils.getHadoopConfiguration(Configuration flinkConfig, ConfigUtils.ConfigContext configContext)` Loads the Hadoop configuration by loading from a Hadoop conf dir (if one exists) and then overlaying properties derived from the Flink config.
`Configuration` | `ConfigUtils.ConfigContext.loadHadoopConfigFromDir(String configDir)` Loads the Hadoop configuration from a directory.
Modifier and Type | Method and Description
---|---
`static Optional<com.google.auth.oauth2.GoogleCredentials>` | `ConfigUtils.getStorageCredentials(Configuration hadoopConfig, ConfigUtils.ConfigContext configContext)` Creates an (optional) `GoogleCredentials` instance for the given Hadoop config and environment.
`static String` | `ConfigUtils.stringifyHadoopConfig(Configuration hadoopConfig)` Helper to serialize a Hadoop config to a string, for logging.
Modifier and Type | Method and Description
---|---
`protected abstract URI` | `AbstractS3FileSystemFactory.getInitURI(URI fsUri, Configuration hadoopConfig)`
Modifier and Type | Method and Description
---|---
`protected URI` | `S3FileSystemFactory.getInitURI(URI fsUri, Configuration hadoopConfig)`

Constructor and Description
---
`HadoopS3AccessHelper(org.apache.hadoop.fs.s3a.S3AFileSystem s3a, Configuration conf)`
Modifier and Type | Method and Description
---|---
`protected URI` | `S3FileSystemFactory.getInitURI(URI fsUri, Configuration hadoopConfig)`
Modifier and Type | Method and Description
---|---
`Configuration` | `HCatInputFormatBase.getConfiguration()` Returns the `Configuration` of the HCatInputFormat.

Constructor and Description
---
`HCatInputFormatBase(String database, String table, Configuration config)` Creates an HCatInputFormat for the given database, table, and `Configuration`.
Constructor and Description
---
`HCatInputFormat(String database, String table, Configuration config)`
Modifier and Type | Method and Description
---|---
`static <SplitT extends FileSourceSplit>` | `OrcColumnarRowInputFormat.createPartitionedFormat(OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> shim, Configuration hadoopConfig, RowType tableType, List<String> partitionKeys, PartitionFieldExtractor<SplitT> extractor, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, java.util.function.Function<RowType,TypeInformation<RowData>> rowTypeInfoFactory)` Creates a partitioned `OrcColumnarRowInputFormat`; the partition columns can be generated from the split.
`static OrcColumnarRowSplitReader<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch>` | `OrcSplitReaderUtil.genPartColumnarRowReader(String hiveVersion, Configuration conf, String[] fullFieldNames, DataType[] fullFieldTypes, Map<String,Object> partitionSpec, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength)` Utility for generating a partitioned `OrcColumnarRowSplitReader`.
Constructor and Description
---
`AbstractOrcFileInputFormat(OrcShim<BatchT> shim, Configuration hadoopConfig, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize)`
`OrcColumnarRowFileInputFormat(OrcShim shim, Configuration hadoopConfig, org.apache.orc.TypeDescription schema, int[] selectedFields, List conjunctPredicates, int batchSize, ColumnBatchFactory batchFactory, TypeInformation producedTypeInfo)` Deprecated.
`OrcColumnarRowInputFormat(OrcShim<BatchT> shim, Configuration hadoopConfig, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, ColumnBatchFactory<BatchT,SplitT> batchFactory, TypeInformation<RowData> producedTypeInfo)`
`OrcColumnarRowSplitReader(OrcShim<BATCH> shim, Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, OrcColumnarRowSplitReader.ColumnBatchGenerator<BATCH> batchGenerator, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength)`
`OrcSplitReader(OrcShim<BATCH> shim, Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength)`
Modifier and Type | Method and Description
---|---
`static <SplitT extends FileSourceSplit>` | `OrcNoHiveColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig, RowType tableType, List<String> partitionKeys, PartitionFieldExtractor<SplitT> extractor, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, java.util.function.Function<RowType,TypeInformation<RowData>> rowTypeInfoFactory)` Creates a partitioned `OrcColumnarRowInputFormat`; the partition columns can be generated from the split.
`static OrcColumnarRowSplitReader<org.apache.orc.storage.ql.exec.vector.VectorizedRowBatch>` | `OrcNoHiveSplitReaderUtil.genPartColumnarRowReader(Configuration conf, String[] fullFieldNames, DataType[] fullFieldTypes, Map<String,Object> partitionSpec, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, int batchSize, Path path, long splitStart, long splitLength)` Utility for generating a partitioned `OrcColumnarRowSplitReader`.
Constructor and Description
---
`OrcNoHiveBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes)`
Modifier and Type | Method and Description
---|---
`org.apache.orc.RecordReader` | `OrcNoHiveShim.createRecordReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, Path path, long splitStart, long splitLength)`
Modifier and Type | Method and Description
---|---
`protected org.apache.orc.Reader` | `OrcShimV200.createReader(org.apache.hadoop.fs.Path path, Configuration conf)`
`protected org.apache.orc.Reader` | `OrcShimV230.createReader(org.apache.hadoop.fs.Path path, Configuration conf)`
`org.apache.orc.RecordReader` | `OrcShim.createRecordReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, Path path, long splitStart, long splitLength)` Creates an ORC `RecordReader` from the configuration, schema, and split information.
`org.apache.orc.RecordReader` | `OrcShimV200.createRecordReader(Configuration conf, org.apache.orc.TypeDescription schema, int[] selectedFields, List<OrcFilters.Predicate> conjunctPredicates, Path path, long splitStart, long splitLength)`
`protected org.apache.orc.Reader.Options` | `OrcShimV200.readOrcConf(org.apache.orc.Reader.Options options, Configuration conf)`
`protected org.apache.orc.Reader.Options` | `OrcShimV230.readOrcConf(org.apache.orc.Reader.Options options, Configuration conf)`
Modifier and Type | Method and Description
---|---
`Configuration` | `SerializableHadoopConfigWrapper.getHadoopConfig()`
Modifier and Type | Method and Description
---|---
`static TableStats` | `OrcFormatStatisticsReportUtil.getTableStatistics(List<Path> files, DataType producedDataType, Configuration hadoopConfig)`

Constructor and Description
---
`SerializableHadoopConfigWrapper(Configuration hadoopConfig)`
Modifier and Type | Class and Description
---|---
`class` | `ThreadLocalClassLoaderConfiguration` Workaround for https://issues.apache.org/jira/browse/ORC-653.
Constructor and Description
---
`OrcBulkWriterFactory(Vectorizer<T> vectorizer, Configuration configuration)` Creates a new OrcBulkWriterFactory using the provided Vectorizer and Hadoop Configuration.
`OrcBulkWriterFactory(Vectorizer<T> vectorizer, Properties writerProperties, Configuration configuration)` Creates a new OrcBulkWriterFactory using the provided Vectorizer, Hadoop Configuration, and ORC writer properties.
`ThreadLocalClassLoaderConfiguration(Configuration other)`
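
A compact sketch of the two-argument constructor, with a minimal `Vectorizer` that writes one string column. The `Vectorizer(String schema)` super-constructor, the import paths, and the `batch.size++` row-cursor pattern follow Flink's ORC examples; treat the details as assumptions:

```java
import java.nio.charset.StandardCharsets;
import org.apache.flink.orc.vector.Vectorizer;           // package assumed
import org.apache.flink.orc.writer.OrcBulkWriterFactory; // package assumed
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;

public class OrcWriterSetup {

    // Maps one input record onto the next row of the ORC batch.
    static class StringVectorizer extends Vectorizer<String> {
        StringVectorizer() {
            super("struct<line:string>"); // ORC TypeDescription schema string
        }

        @Override
        public void vectorize(String element, VectorizedRowBatch batch) {
            BytesColumnVector col = (BytesColumnVector) batch.cols[0];
            int row = batch.size++; // claim the next row in the batch
            col.setVal(row, element.getBytes(StandardCharsets.UTF_8));
        }
    }

    public static void main(String[] args) {
        OrcBulkWriterFactory<String> factory =
                new OrcBulkWriterFactory<>(new StringVectorizer(), new Configuration());
    }
}
```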
Constructor and Description
---
`HadoopModule(SecurityConfiguration securityConfiguration, Configuration hadoopConfiguration)`
Modifier and Type | Method and Description
---|---
`static Configuration` | `HadoopUtils.getHadoopConfiguration(Configuration flinkConfiguration)`
`Configuration` | `HadoopConfigLoader.getOrLoadHadoopConfig()` Gets the loaded Hadoop config, falling back to one loaded from the classpath.
Modifier and Type | Method and Description
---|---
`Configuration` | `SerializableConfiguration.getConfiguration()`
Modifier and Type | Method and Description
---|---
`T` | `HadoopPathBasedBulkFormatBuilder.withConfiguration(Configuration configuration)`
Constructor and Description
---
`HadoopPathBasedBulkFormatBuilder(org.apache.hadoop.fs.Path basePath, HadoopPathBasedBulkWriter.Factory<IN> writerFactory, Configuration configuration, BucketAssigner<IN,BucketID> assigner)`
`HadoopPathBasedBulkFormatBuilder(org.apache.hadoop.fs.Path basePath, HadoopPathBasedBulkWriter.Factory<IN> writerFactory, HadoopFileCommitterFactory fileCommitterFactory, Configuration configuration, BucketAssigner<IN,BucketID> assigner, CheckpointRollingPolicy<IN,BucketID> policy, BucketFactory<IN,BucketID> bucketFactory, OutputFileConfig outputFileConfig)`
`SerializableConfiguration(Configuration configuration)`
Modifier and Type | Method and Description
---|---
`BulkWriter.Factory<RowData>` | `HiveShimV200.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes)`
`BulkWriter.Factory<RowData>` | `HiveShim.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes)` Creates an ORC `BulkWriter.Factory` for different Hive versions.
`BulkWriter.Factory<RowData>` | `HiveShimV100.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes)`
`void` | `HiveShim.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits)` Creates a table with PK and NOT NULL constraints.
`void` | `HiveShimV100.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits)`
`void` | `HiveShimV210.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits)`
`void` | `HiveShimV310.createTableWithConstraints(org.apache.hadoop.hive.metastore.IMetaStoreClient client, org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits)`
`void` | `HiveMetastoreClientWrapper.createTableWithConstraints(org.apache.hadoop.hive.metastore.api.Table table, Configuration conf, UniqueConstraint pk, List<Byte> pkTraits, List<String> notNullCols, List<Byte> nnTraits)`
`List<org.apache.hadoop.hive.metastore.api.FieldSchema>` | `HiveShim.getFieldsFromDeserializer(Configuration conf, org.apache.hadoop.hive.metastore.api.Table table, boolean skipConfError)` Gets the Hive table schema from the deserializer.
`List<org.apache.hadoop.hive.metastore.api.FieldSchema>` | `HiveShimV100.getFieldsFromDeserializer(Configuration conf, org.apache.hadoop.hive.metastore.api.Table table, boolean skipConfError)`
`List<org.apache.hadoop.hive.metastore.api.FieldSchema>` | `HiveShimV110.getFieldsFromDeserializer(Configuration conf, org.apache.hadoop.hive.metastore.api.Table table, boolean skipConfError)`
`Set<String>` | `HiveMetastoreClientWrapper.getNotNullColumns(Configuration conf, String dbName, String tableName)`
`Set<String>` | `HiveShim.getNotNullColumns(org.apache.hadoop.hive.metastore.IMetaStoreClient client, Configuration conf, String dbName, String tableName)` Gets the set of columns that have NOT NULL constraints.
`Set<String>` | `HiveShimV100.getNotNullColumns(org.apache.hadoop.hive.metastore.IMetaStoreClient client, Configuration conf, String dbName, String tableName)`
`Set<String>` | `HiveShimV310.getNotNullColumns(org.apache.hadoop.hive.metastore.IMetaStoreClient client, Configuration conf, String dbName, String tableName)`
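
The shim hierarchy exists so one Flink build can talk to several Hive versions; callers resolve the version-appropriate implementation at runtime. A minimal sketch, assuming a `HiveShimLoader` that picks the shim for the Hive version on the classpath (loader name and package are assumptions):

```java
import org.apache.flink.table.catalog.hive.client.HiveShim;       // package assumed
import org.apache.flink.table.catalog.hive.client.HiveShimLoader; // loader name assumed

public class HiveShimExample {
    public static void main(String[] args) {
        // Pick the HiveShimV1xx/V2xx/V3xx implementation matching the
        // Hive client libraries found on the classpath.
        HiveShim shim = HiveShimLoader.loadHiveShim(HiveShimLoader.getHiveVersion());

        // From here, version-sensitive calls such as getNotNullColumns(...)
        // or createOrcBulkWriterFactory(...) go through the shim.
        System.out.println(shim.getClass().getSimpleName());
    }
}
```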
Modifier and Type | Method and Description
---|---
`static Configuration` | `HiveTableUtil.getHadoopConfiguration(String hadoopConfDir)` Returns a new Hadoop `Configuration` object using the given path to the Hadoop conf directory.
Modifier and Type | Method and Description
---|---
`static boolean` | `HiveParserUtils.legacyGrouping(Configuration conf)`
Modifier and Type | Method and Description
---|---
`Configuration` | `HiveParserContext.getConf()`

Constructor and Description
---
`HiveParserContext(Configuration conf)` Creates a HiveParserContext with the given configuration.
`HiveParserStorageFormat(Configuration conf)`
`HiveParserUnparseTranslator(Configuration conf)`
Modifier and Type | Method and Description
---|---
`void` | `HiveASTLexer.setHiveConf(Configuration hiveConf)`
`void` | `HiveASTParser.setHiveConf(Configuration hiveConf)`
Modifier and Type | Method and Description
---|---
`static void` | `Utils.setTokensFor(org.apache.hadoop.yarn.api.records.ContainerLaunchContext amContainer, List<org.apache.hadoop.fs.Path> paths, Configuration conf, boolean obtainingDelegationTokens)`
`static void` | `Utils.setupYarnClassPath(Configuration conf, Map<String,String> appMasterEnv)`
Modifier and Type | Method and Description
---|---
`void` | `Configuration.addResource(Configuration conf)` Adds a configuration resource.
`static void` | `Configuration.dumpConfiguration(Configuration config, String propertyName, Writer out)` Writes properties and their attributes (final and resource) to the given `Writer`.
`static void` | `Configuration.dumpConfiguration(Configuration config, Writer out)` Writes out all properties and their attributes (final and resource) to the given `Writer`.

Constructor and Description
---
`Configuration(Configuration other)` A new configuration with the same settings cloned from another.
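
These are Hadoop's own `Configuration` primitives. A small self-contained sketch exercising `addResource`, the copy constructor, and `dumpConfiguration` (the resource path is illustrative):

```java
import java.io.IOException;
import java.io.StringWriter;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfDumpExample {
    public static void main(String[] args) throws IOException {
        Configuration base = new Configuration();
        base.addResource(new Path("/etc/hadoop/conf/core-site.xml")); // extra resource file

        // Clone the settings into a new, independent Configuration.
        Configuration copy = new Configuration(base);

        // Dump all properties, with their final/resource attributes, to a Writer.
        StringWriter out = new StringWriter();
        Configuration.dumpConfiguration(copy, out);
        System.out.println(out);
    }
}
```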
Modifier and Type | Class and Description
---|---
`class` | `HiveConf` Hive Configuration.
Modifier and Type | Method and Description
---|---
`static String` | `HiveConf.StrictChecks.checkBucketing(Configuration conf)`
`static String` | `HiveConf.StrictChecks.checkCartesian(Configuration conf)`
`static String` | `HiveConf.StrictChecks.checkNoLimit(Configuration conf)`
`static String` | `HiveConf.StrictChecks.checkNoPartitionFilter(Configuration conf)`
`static String` | `HiveConf.StrictChecks.checkTypeSafety(Configuration conf)`
`static boolean` | `HiveConf.getBoolVar(Configuration conf, HiveConf.ConfVars var)`
`static boolean` | `HiveConf.getBoolVar(Configuration conf, HiveConf.ConfVars var, boolean defaultVal)`
`static float` | `HiveConf.getFloatVar(Configuration conf, HiveConf.ConfVars var)`
`static float` | `HiveConf.getFloatVar(Configuration conf, HiveConf.ConfVars var, float defaultVal)`
`static int` | `HiveConf.getIntVar(Configuration conf, HiveConf.ConfVars var)`
`static long` | `HiveConf.getLongVar(Configuration conf, HiveConf.ConfVars var)`
`static long` | `HiveConf.getLongVar(Configuration conf, HiveConf.ConfVars var, long defaultVal)`
`static Properties` | `HiveConf.getProperties(Configuration conf)`
`static String` | `HiveConf.getQueryString(Configuration conf)`
`static long` | `HiveConf.getSizeVar(Configuration conf, HiveConf.ConfVars var)`
`static long` | `HiveConf.getTimeVar(Configuration conf, HiveConf.ConfVars var, TimeUnit outUnit)`
`static String[]` | `HiveConf.getTrimmedStringsVar(Configuration conf, HiveConf.ConfVars var)`
`static String` | `HiveConf.getTrimmedVar(Configuration conf, HiveConf.ConfVars var)`
`static String` | `HiveConf.getVar(Configuration conf, HiveConf.ConfVars var)`
`static String` | `HiveConf.getVar(Configuration conf, HiveConf.ConfVars var, HiveConf.EncoderDecoder<String,String> encoderDecoder)`
`static String` | `HiveConf.getVar(Configuration conf, HiveConf.ConfVars var, String defaultVal)`
`static String` | `HiveConf.getVarWithoutType(Configuration conf, HiveConf.ConfVars var)`
`static boolean` | `HiveConf.isSparkDPPAny(Configuration conf)`
`static void` | `HiveConf.setBoolVar(Configuration conf, HiveConf.ConfVars var, boolean val)`
`static void` | `HiveConf.setFloatVar(Configuration conf, HiveConf.ConfVars var, float val)`
`static void` | `HiveConf.setIntVar(Configuration conf, HiveConf.ConfVars var, int val)`
`static void` | `HiveConf.setLongVar(Configuration conf, HiveConf.ConfVars var, long val)`
`static void` | `HiveConf.setQueryString(Configuration conf, String query)`
`static void` | `HiveConf.setTimeVar(Configuration conf, HiveConf.ConfVars var, long time, TimeUnit timeunit)`
`static void` | `HiveConf.setVar(Configuration conf, HiveConf.ConfVars var, String val)`
`static void` | `HiveConf.setVar(Configuration conf, HiveConf.ConfVars var, String val, HiveConf.EncoderDecoder<String,String> encoderDecoder)`
`void` | `HiveConf.stripHiddenConfigurations(Configuration conf)` Strips hidden config entries from the configuration.

Constructor and Description
---
`HiveConf(Configuration other, Class<?> cls)`
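
The static `getVar`/`setVar` family reads and writes Hive settings on any Hadoop `Configuration`, keyed by the `ConfVars` enum. A brief sketch; the specific `ConfVars` entries are common ones, assumed to exist in the Hive version at hand:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;

public class HiveConfVarsExample {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Typed write and read of the metastore URI setting.
        HiveConf.setVar(conf, HiveConf.ConfVars.METASTOREURIS, "thrift://metastore:9083");
        String uris = HiveConf.getVar(conf, HiveConf.ConfVars.METASTOREURIS);

        // Boolean variant with an explicit default for unset keys.
        boolean concurrency =
                HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY, false);

        System.out.println(uris + " / " + concurrency);
    }
}
```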
Modifier and Type | Field and Description
---|---
`protected Configuration` | `HiveMetaStoreClient.conf`
Modifier and Type | Method and Description
---|---
`boolean` | `HiveMetaStoreClient.isCompatibleWith(Configuration conf)`
`boolean` | `HiveMetaStoreClient.isSameConfObj(Configuration c)`

Constructor and Description
---
`HiveMetaStoreClient(Configuration conf)`
`HiveMetaStoreClient(Configuration conf, org.apache.hadoop.hive.metastore.HiveMetaHookLoader hookLoader)`
`HiveMetaStoreClient(Configuration conf, org.apache.hadoop.hive.metastore.HiveMetaHookLoader hookLoader, Boolean allowEmbedded)`
Copyright © 2014–2024 The Apache Software Foundation. All rights reserved.