Modifier and Type | Method and Description
---|---
TableSchema | HBaseUpsertTableSink.getTableSchema()
TableSchema | HBaseTableSource.getTableSchema()
Modifier and Type | Method and Description
---|---
TableSchema | JDBCTableSource.getTableSchema()
Modifier and Type | Method and Description
---|---
JDBCTableSource.Builder | JDBCTableSource.Builder.setSchema(TableSchema schema) Required; the table schema of this table source.
JDBCUpsertTableSink.Builder | JDBCUpsertTableSink.Builder.setTableSchema(TableSchema schema) Required; the table schema of this table sink.
Modifier and Type | Method and Description
---|---
TableSchema | HiveTableSource.getTableSchema()
TableSchema | HiveTableSink.getTableSchema()
Constructor and Description
---
HiveOutputFormatFactory(org.apache.hadoop.mapred.JobConf jobConf, Class hiveOutputFormatClz, org.apache.hadoop.hive.metastore.api.SerDeInfo serDeInfo, TableSchema schema, String[] partitionColumns, Properties tableProperties, HiveShim hiveShim, boolean isCompressed)
Modifier and Type | Method and Description
---|---
TableSchema | ParquetTableSource.getTableSchema()
Modifier and Type | Method and Description
---|---
protected TableSchema | Mapper.getDataSchema() Gets the schema of the input rows.
protected TableSchema | ModelMapper.getModelSchema() Gets the schema of the model rows that are passed to ModelMapper.loadModel(List).
abstract TableSchema | Mapper.getOutputSchema() Gets the schema of the output rows of the Mapper.map(Row) method.
Constructor and Description
---
Mapper(TableSchema dataSchema, Params params) Constructs a Mapper.
ModelMapper(TableSchema modelSchema, TableSchema dataSchema, Params params) Constructs a ModelMapper.
Modifier and Type | Method and Description
---|---
TableSchema | AlgoOperator.getSchema() Returns the schema of the output table.
Modifier and Type | Method and Description
---|---
TableSchema | OrcTableSource.getTableSchema()
Modifier and Type | Method and Description
---|---
TableSchema | StreamSQLTestProgram.GeneratorTableSource.getTableSchema()
TableSchema | BatchSQLTestProgram.GeneratorTableSource.getTableSchema()
Modifier and Type | Method and Description
---|---
protected abstract ElasticsearchUpsertTableSinkBase | ElasticsearchUpsertTableSinkBase.copy(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory)
protected abstract ElasticsearchUpsertTableSinkBase | ElasticsearchUpsertTableSinkFactoryBase.createElasticsearchUpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions)

Constructor and Description
---
ElasticsearchUpsertTableSinkBase(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory)
Modifier and Type | Method and Description
---|---
protected ElasticsearchUpsertTableSinkBase | Elasticsearch6UpsertTableSink.copy(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory)
protected ElasticsearchUpsertTableSinkBase | Elasticsearch6UpsertTableSinkFactory.createElasticsearchUpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions)

Constructor and Description
---
Elasticsearch6UpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions)
Modifier and Type | Method and Description
---|---
protected ElasticsearchUpsertTableSinkBase | Elasticsearch7UpsertTableSink.copy(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory)
protected ElasticsearchUpsertTableSinkBase | Elasticsearch7UpsertTableSinkFactory.createElasticsearchUpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions)

Constructor and Description
---
Elasticsearch7UpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions)
Modifier and Type | Method and Description
---|---
TableSchema | KafkaTableSourceBase.getTableSchema()
Modifier and Type | Method and Description
---|---
protected KafkaTableSinkBase | Kafka08TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)
protected KafkaTableSinkBase | KafkaTableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)
protected KafkaTableSinkBase | Kafka011TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)
protected KafkaTableSinkBase | Kafka010TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)
protected KafkaTableSinkBase | Kafka09TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)
protected abstract KafkaTableSinkBase | KafkaTableSourceSinkFactoryBase.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema) Constructs the version-specific Kafka table sink.
protected KafkaTableSourceBase | Kafka08TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)
protected KafkaTableSourceBase | KafkaTableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)
protected KafkaTableSourceBase | Kafka011TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)
protected KafkaTableSourceBase | Kafka010TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)
protected KafkaTableSourceBase | Kafka09TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets)
protected abstract KafkaTableSourceBase | KafkaTableSourceSinkFactoryBase.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets) Constructs the version-specific Kafka table source.
Modifier and Type | Method and Description
---|---
TableSchema | TableSchema.Builder.build() Returns a TableSchema instance.
TableSchema | TableSchema.copy() Returns a deep copy of the table schema.
static TableSchema | TableSchema.fromTypeInfo(TypeInformation<?> typeInfo) Deprecated. This method will be removed soon; use DataTypes to declare types.
TableSchema | Table.getSchema() Returns the schema of this table.
Modifier and Type | Method and Description
---|---
TableSchema | TableImpl.getSchema()
Modifier and Type | Method and Description
---|---
static <T1> TableSchema | ConnectorCatalogTable.calculateSourceSchema(TableSource<T1> source, boolean isBatch)
TableSchema | AbstractCatalogTable.getSchema()
TableSchema | AbstractCatalogView.getSchema()
TableSchema | CatalogBaseTable.getSchema() Gets the schema of the table.
Constructor and Description
---
AbstractCatalogTable(TableSchema tableSchema, List<String> partitionKeys, Map<String,String> properties, String comment)
AbstractCatalogTable(TableSchema tableSchema, Map<String,String> properties, String comment)
AbstractCatalogView(String originalQuery, String expandedQuery, TableSchema schema, Map<String,String> properties, String comment)
CatalogTableBuilder(ConnectorDescriptor connectorDescriptor, TableSchema tableSchema)
CatalogTableImpl(TableSchema tableSchema, List<String> partitionKeys, Map<String,String> properties, String comment)
CatalogTableImpl(TableSchema tableSchema, Map<String,String> properties, String comment)
CatalogViewImpl(String originalQuery, String expandedQuery, TableSchema schema, Map<String,String> properties, String comment)
ConnectorCatalogTable(TableSource<T1> tableSource, TableSink<T2> tableSink, TableSchema tableSchema, boolean isBatch)
Modifier and Type | Method and Description
---|---
static TableSchema | HiveTableUtil.createTableSchema(List<org.apache.hadoop.hive.metastore.api.FieldSchema> cols, List<org.apache.hadoop.hive.metastore.api.FieldSchema> partitionKeys, Set<String> notNullColumns, UniqueConstraint primaryKey) Creates a Flink TableSchema from a Hive table's columns and partition keys.
Modifier and Type | Method and Description
---|---
static List<org.apache.hadoop.hive.metastore.api.FieldSchema> | HiveTableUtil.createHiveColumns(TableSchema schema) Creates Hive columns from a Flink TableSchema.
Modifier and Type | Method and Description
---|---
TableSchema | ResultDescriptor.getResultSchema()
TableSchema | Executor.getTableSchema(String sessionId, String name) Returns the schema of a table.
Constructor and Description
---
ResultDescriptor(String resultId, TableSchema resultSchema, boolean isMaterialized)
Modifier and Type | Method and Description
---|---
TableSchema | CollectBatchTableSink.getTableSchema()
TableSchema | LocalExecutor.getTableSchema(String sessionId, String name)
Modifier and Type | Method and Description
---|---
<T> DynamicResult<T> | ResultStore.createResult(Environment env, TableSchema schema, ExecutionConfig config, ClassLoader classLoader) Creates a result.
Constructor and Description
---
CollectBatchTableSink(String accumulatorName, TypeSerializer<Row> serializer, TableSchema tableSchema)
CollectStreamTableSink(InetAddress targetAddress, int targetPort, TypeSerializer<Tuple2<Boolean,Row>> serializer, TableSchema tableSchema)
Constructor and Description
---
ChangelogCollectStreamResult(TableSchema tableSchema, ExecutionConfig config, InetAddress gatewayAddress, int gatewayPort, ClassLoader classLoader)
CollectStreamResult(TableSchema tableSchema, ExecutionConfig config, InetAddress gatewayAddress, int gatewayPort, ClassLoader classLoader)
MaterializedCollectBatchResult(TableSchema tableSchema, ExecutionConfig config, ClassLoader classLoader)
MaterializedCollectStreamResult(TableSchema tableSchema, ExecutionConfig config, InetAddress gatewayAddress, int gatewayPort, int maxRowCount, ClassLoader classLoader)
MaterializedCollectStreamResult(TableSchema tableSchema, ExecutionConfig config, InetAddress gatewayAddress, int gatewayPort, int maxRowCount, int overcommitThreshold, ClassLoader classLoader)
Modifier and Type | Method and Description
---|---
static TableSchema | SchemaValidator.deriveTableSinkSchema(DescriptorProperties properties) Deprecated. This method combines two separate concepts, table schema and field mapping, and should be split into two methods once the corresponding interfaces are supported (see FLINK-9870).
TableSchema | DescriptorProperties.getTableSchema(String key) Returns the table schema under the given existing key.
Modifier and Type | Method and Description
---|---
Optional<TableSchema> | DescriptorProperties.getOptionalTableSchema(String key) Returns a table schema under the given key if it exists.
Modifier and Type | Method and Description
---|---
void | DescriptorProperties.putTableSchema(String key, TableSchema schema) Adds a table schema under the given key.
OldCsv | OldCsv.schema(TableSchema schema) Deprecated. Sets the format schema with field names and types.
Schema | Schema.schema(TableSchema schema) Sets the schema with field names and types.
Modifier and Type | Method and Description
---|---
static TableSchema | TableFormatFactoryBase.deriveSchema(Map<String,String> properties) Finds the table schema that can be used for a format schema (without time attributes and generated columns).
Modifier and Type | Method and Description
---|---
TableSchema | ScalaDataStreamQueryOperation.getTableSchema()
TableSchema | JavaDataStreamQueryOperation.getTableSchema()
TableSchema | DataSetQueryOperation.getTableSchema()
TableSchema | DistinctQueryOperation.getTableSchema()
TableSchema | FilterQueryOperation.getTableSchema()
TableSchema | SortQueryOperation.getTableSchema()
TableSchema | TableSourceQueryOperation.getTableSchema()
TableSchema | JoinQueryOperation.getTableSchema()
TableSchema | CatalogQueryOperation.getTableSchema()
TableSchema | WindowAggregateQueryOperation.getTableSchema()
TableSchema | ProjectQueryOperation.getTableSchema()
TableSchema | AggregateQueryOperation.getTableSchema()
TableSchema | SetQueryOperation.getTableSchema()
TableSchema | QueryOperation.getTableSchema() Returns the resolved schema of this operation.
TableSchema | CalculatedQueryOperation.getTableSchema()
Modifier and Type | Method and Description
---|---
TableSchema | DataStreamQueryOperation.getTableSchema()
TableSchema | PlannerQueryOperation.getTableSchema()

Constructor and Description
---
DataStreamQueryOperation(ObjectIdentifier identifier, DataStream<E> dataStream, int[] fieldIndices, TableSchema tableSchema, boolean[] fieldNullables, boolean producesUpdates, boolean isAccRetract, org.apache.flink.table.planner.plan.stats.FlinkStatistic statistic)
Modifier and Type | Method and Description
---|---
TableSchema | CsvTableSink.getTableSchema()
default TableSchema | TableSink.getTableSchema() Returns the schema of the consumed table.
Modifier and Type | Method and Description
---|---
TableSchema | CsvTableSource.getTableSchema()
TableSchema | TableSource.getTableSchema() Deprecated. A table schema is a logical description of a table and should not be part of the physical TableSource. Define the schema when registering a table, either in DDL or via TableEnvironment#connect(...).
Modifier and Type | Method and Description
---|---
static void | TableSourceValidation.validateTableSource(TableSource<?> tableSource, TableSchema schema) Validates a TableSource.
Modifier and Type | Method and Description
---|---
static TableSchema | DataTypeUtils.expandCompositeTypeToSchema(DataType dataType) Expands a composite DataType to a corresponding TableSchema.
Modifier and Type | Method and Description
---|---
TableSchema | FieldInfoUtils.TypeInfoSchema.toTableSchema()
Modifier and Type | Method and Description
---|---
static TableSchema | TableSchemaUtils.checkNoGeneratedColumns(TableSchema schema) Throws an exception if the given TableSchema contains any generated columns.
static TableSchema | TableSchemaUtils.getPhysicalSchema(TableSchema tableSchema) Returns a TableSchema that consists of only the physical columns.
Modifier and Type | Method and Description
---|---
static TableSchema | TableSchemaUtils.checkNoGeneratedColumns(TableSchema schema) Throws an exception if the given TableSchema contains any generated columns.
static boolean | TableSchemaUtils.containsGeneratedColumns(TableSchema schema) Returns true if there are any generated columns in the given TableSchema.
static TableSchema | TableSchemaUtils.getPhysicalSchema(TableSchema tableSchema) Returns a TableSchema that consists of only the physical columns.
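The TableSchemaUtils helpers above can be exercised as follows. A hedged sketch, assuming the Flink 1.10-era `org.apache.flink.table.utils.TableSchemaUtils` class on the classpath:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.utils.TableSchemaUtils;

public class PhysicalSchemaExample {
    public static void main(String[] args) {
        TableSchema schema = TableSchema.builder()
                .field("id", DataTypes.BIGINT())
                .field("name", DataTypes.STRING())
                .build();

        // This schema declares no generated (computed) columns, so the
        // physical schema keeps both columns unchanged.
        System.out.println(TableSchemaUtils.containsGeneratedColumns(schema));
        TableSchema physical = TableSchemaUtils.getPhysicalSchema(schema);
        System.out.println(physical.getFieldCount());
    }
}
```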
Modifier and Type | Method and Description
---|---
TableSchema | SpendReportTableSink.getTableSchema()
TableSchema | UnboundedTransactionTableSource.getTableSchema()
TableSchema | BoundedTransactionTableSource.getTableSchema()
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.