Modifier and Type | Method and Description |
---|---|
TableSchema |
HBaseUpsertTableSink.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
TableSchema |
HBaseTableSource.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
TableSchema |
HBaseTableSchema.convertsToTableSchema()
Converts this
HBaseTableSchema to a TableSchema ; the fields consist of the
families and the rowkey, in the definition order. |
Modifier and Type | Method and Description |
---|---|
static HBaseTableSchema |
HBaseTableSchema.fromTableSchema(TableSchema schema)
Construct a
HBaseTableSchema from a TableSchema . |
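The two conversion methods above can be sketched as a round trip. This is a minimal sketch, not verified against a specific release; the HBaseTableSchema package name is an assumption (it moved between Flink versions).

```java
import org.apache.flink.connector.hbase.util.HBaseTableSchema; // package name is an assumption
import org.apache.flink.table.api.TableSchema;

public class HBaseSchemaRoundTrip {
    public static void main(String[] args) {
        HBaseTableSchema hbaseSchema = new HBaseTableSchema();
        hbaseSchema.setRowKey("rowkey", Long.class);
        hbaseSchema.addColumn("cf", "name", String.class);
        hbaseSchema.addColumn("cf", "score", Double.class);

        // Families become ROW-typed fields; fields appear in definition order.
        TableSchema tableSchema = hbaseSchema.convertsToTableSchema();

        // And back: reconstruct an HBaseTableSchema from the flat schema.
        HBaseTableSchema restored = HBaseTableSchema.fromTableSchema(tableSchema);
        System.out.println(tableSchema);
    }
}
```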
Modifier and Type | Method and Description |
---|---|
default void |
JdbcDialect.validate(TableSchema schema)
Checks whether this dialect instance supports a specific data type in the table schema.
|
Modifier and Type | Field and Description |
---|---|
protected TableSchema |
JdbcTableSource.Builder.schema |
protected TableSchema |
JdbcUpsertTableSink.Builder.schema |
Modifier and Type | Method and Description |
---|---|
TableSchema |
JdbcTableSource.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
JdbcTableSource.Builder |
JdbcTableSource.Builder.setSchema(TableSchema schema)
Required. The table schema of this table source.
|
JdbcUpsertTableSink.Builder |
JdbcUpsertTableSink.Builder.setTableSchema(TableSchema schema)
Required. The table schema of this table sink.
|
Constructor and Description |
---|
JdbcDynamicTableSink(JdbcOptions jdbcOptions,
JdbcExecutionOptions executionOptions,
JdbcDmlOptions dmlOptions,
TableSchema tableSchema) |
JdbcDynamicTableSource(JdbcOptions options,
JdbcReadOptions readOptions,
JdbcLookupOptions lookupOptions,
TableSchema physicalSchema) |
Modifier and Type | Method and Description |
---|---|
static TableSchema |
JdbcTypeUtil.normalizeTableSchema(TableSchema schema)
The original table schema may contain generated columns which shouldn't be produced/consumed
by TableSource/TableSink.
|
Modifier and Type | Method and Description |
---|---|
static TableSchema |
JdbcTypeUtil.normalizeTableSchema(TableSchema schema)
The original table schema may contain generated columns which shouldn't be produced/consumed
by TableSource/TableSink.
|
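The effect of JdbcTypeUtil.normalizeTableSchema can be sketched as follows. This is a sketch under assumptions: the JdbcTypeUtil package name and the exact computed-column expression are illustrative, not verified.

```java
import org.apache.flink.connector.jdbc.utils.JdbcTypeUtil; // package name is an assumption
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;

public class NormalizeSchemaExample {
    public static void main(String[] args) {
        TableSchema schema = TableSchema.builder()
            .field("id", DataTypes.BIGINT())
            .field("ts", DataTypes.TIMESTAMP(3))
            // A generated (computed) column; the SQL expression is illustrative.
            .field("ts_day", DataTypes.STRING(), "DATE_FORMAT(ts, 'yyyy-MM-dd')")
            .build();

        // Only the physical columns survive, since generated columns are not
        // produced or consumed by a TableSource/TableSink.
        TableSchema physical = JdbcTypeUtil.normalizeTableSchema(schema);
        System.out.println(physical);
    }
}
```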
Modifier and Type | Method and Description |
---|---|
TableSchema |
HiveTableSink.getTableSchema() |
TableSchema |
HiveTableSource.getTableSchema() |
Constructor and Description |
---|
HiveWriterFactory(org.apache.hadoop.mapred.JobConf jobConf,
Class hiveOutputFormatClz,
org.apache.hadoop.hive.metastore.api.SerDeInfo serDeInfo,
TableSchema schema,
String[] partitionColumns,
Properties tableProperties,
HiveShim hiveShim,
boolean isCompressed) |
Modifier and Type | Method and Description |
---|---|
TableSchema |
ParquetTableSource.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
protected TableSchema |
Mapper.getDataSchema()
Get the schema of input rows.
|
protected TableSchema |
ModelMapper.getModelSchema()
Get the schema of the model rows that are passed to
ModelMapper.loadModel(List) . |
abstract TableSchema |
Mapper.getOutputSchema()
Get the schema of the output rows of
Mapper.map(Row) method. |
Constructor and Description |
---|
Mapper(TableSchema dataSchema,
Params params)
Construct a Mapper.
|
ModelMapper(TableSchema modelSchema,
TableSchema dataSchema,
Params params)
Constructs a ModelMapper.
|
Modifier and Type | Method and Description |
---|---|
TableSchema |
OutputColsHelper.getResultSchema()
Get the result table schema.
|
Modifier and Type | Method and Description |
---|---|
static void |
TableUtil.assertNumericalCols(TableSchema tableSchema,
String... selectedCols)
Checks whether the column types of the
selectedCols are numerical; throws an
exception if not. |
static void |
TableUtil.assertStringCols(TableSchema tableSchema,
String... selectedCols)
Checks whether the column types of the
selectedCols are string; throws an exception if not. |
static void |
TableUtil.assertVectorCols(TableSchema tableSchema,
String... selectedCols)
Checks whether the column types of the
selectedCols are vector; throws an exception if not. |
static int |
TableUtil.findColIndex(TableSchema tableSchema,
String targetCol)
Finds the index of
targetCol in the tableSchema . |
static int[] |
TableUtil.findColIndices(TableSchema tableSchema,
String[] targetCols)
Finds the indices of
targetCols in the tableSchema . |
static TypeInformation<?> |
TableUtil.findColType(TableSchema tableSchema,
String targetCol)
Finds the type of the
targetCol . |
static TypeInformation<?>[] |
TableUtil.findColTypes(TableSchema tableSchema,
String[] targetCols)
Finds the types of the
targetCols . |
static String[] |
TableUtil.getCategoricalCols(TableSchema tableSchema,
String[] featureCols,
String[] categoricalCols)
Gets the columns from featureCols that are included in the
categoricalCols , plus
the columns whose types are string or boolean. |
static String[] |
TableUtil.getNumericCols(TableSchema tableSchema)
Return the columns in the table whose types are numeric.
|
static String[] |
TableUtil.getNumericCols(TableSchema tableSchema,
String[] excludeCols)
Return the columns in the table whose types are numeric and are not included in the
excludeCols.
|
static String[] |
TableUtil.getStringCols(TableSchema tableSchema)
Return the columns in the table whose types are string.
|
static String[] |
TableUtil.getStringCols(TableSchema tableSchema,
String[] excludeCols)
Return the columns in the table whose types are string and are not included in the
excludeCols.
|
static Table |
DataSetConversionUtil.toTable(Long sessionId,
DataSet<Row> data,
TableSchema schema)
Converts the given DataSet into a Table with the specified TableSchema.
|
static Table |
DataStreamConversionUtil.toTable(Long sessionId,
DataStream<Row> data,
TableSchema schema)
Converts the given DataStream into a Table with the specified TableSchema.
|
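The TableUtil lookup and assertion helpers above can be sketched like this. The package name is an assumption (these utilities live in the flink-ml-lib module); the field names are illustrative.

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.ml.common.utils.TableUtil; // package name is an assumption
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;

public class TableUtilExample {
    public static void main(String[] args) {
        TableSchema schema = TableSchema.builder()
            .field("id", DataTypes.BIGINT())
            .field("name", DataTypes.STRING())
            .field("score", DataTypes.DOUBLE())
            .build();

        // Look up a column's position and type by name.
        int idx = TableUtil.findColIndex(schema, "score");
        TypeInformation<?> type = TableUtil.findColType(schema, "name");

        // Select all numeric columns; here: id and score.
        String[] numeric = TableUtil.getNumericCols(schema);

        // Throws if any of the listed columns is not numerical.
        TableUtil.assertNumericalCols(schema, "id", "score");

        System.out.println(idx + " / " + type + " / " + numeric.length);
    }
}
```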
Constructor and Description |
---|
OutputColsHelper(TableSchema inputSchema,
String[] outputColNames,
TypeInformation<?>[] outputColTypes) |
OutputColsHelper(TableSchema inputSchema,
String[] outputColNames,
TypeInformation<?>[] outputColTypes,
String[] reservedColNames)
The constructor.
|
OutputColsHelper(TableSchema inputSchema,
String outputColName,
TypeInformation<?> outputColType) |
OutputColsHelper(TableSchema inputSchema,
String outputColName,
TypeInformation<?> outputColType,
String[] reservedColNames) |
Modifier and Type | Method and Description |
---|---|
TableSchema |
AlgoOperator.getSchema()
Returns the schema of the output table.
|
Modifier and Type | Method and Description |
---|---|
TableSchema |
OrcTableSource.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
TableSchema |
StreamSQLTestProgram.GeneratorTableSource.getTableSchema() |
TableSchema |
BatchSQLTestProgram.GeneratorTableSource.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
protected abstract ElasticsearchUpsertTableSinkBase |
ElasticsearchUpsertTableSinkBase.copy(boolean isAppendOnly,
TableSchema schema,
List<ElasticsearchUpsertTableSinkBase.Host> hosts,
String index,
String docType,
String keyDelimiter,
String keyNullLiteral,
SerializationSchema<Row> serializationSchema,
org.elasticsearch.common.xcontent.XContentType contentType,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions,
ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory) |
protected abstract ElasticsearchUpsertTableSinkBase |
ElasticsearchUpsertTableSinkFactoryBase.createElasticsearchUpsertTableSink(boolean isAppendOnly,
TableSchema schema,
List<ElasticsearchUpsertTableSinkBase.Host> hosts,
String index,
String docType,
String keyDelimiter,
String keyNullLiteral,
SerializationSchema<Row> serializationSchema,
org.elasticsearch.common.xcontent.XContentType contentType,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
Constructor and Description |
---|
ElasticsearchUpsertTableSinkBase(boolean isAppendOnly,
TableSchema schema,
List<ElasticsearchUpsertTableSinkBase.Host> hosts,
String index,
String docType,
String keyDelimiter,
String keyNullLiteral,
SerializationSchema<Row> serializationSchema,
org.elasticsearch.common.xcontent.XContentType contentType,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions,
ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory) |
Modifier and Type | Method and Description |
---|---|
static IndexGenerator |
IndexGeneratorFactory.createIndexGenerator(String index,
TableSchema schema) |
Modifier and Type | Method and Description |
---|---|
protected ElasticsearchUpsertTableSinkBase |
Elasticsearch6UpsertTableSink.copy(boolean isAppendOnly,
TableSchema schema,
List<ElasticsearchUpsertTableSinkBase.Host> hosts,
String index,
String docType,
String keyDelimiter,
String keyNullLiteral,
SerializationSchema<Row> serializationSchema,
org.elasticsearch.common.xcontent.XContentType contentType,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions,
ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory) |
protected ElasticsearchUpsertTableSinkBase |
Elasticsearch6UpsertTableSinkFactory.createElasticsearchUpsertTableSink(boolean isAppendOnly,
TableSchema schema,
List<ElasticsearchUpsertTableSinkBase.Host> hosts,
String index,
String docType,
String keyDelimiter,
String keyNullLiteral,
SerializationSchema<Row> serializationSchema,
org.elasticsearch.common.xcontent.XContentType contentType,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
Constructor and Description |
---|
Elasticsearch6UpsertTableSink(boolean isAppendOnly,
TableSchema schema,
List<ElasticsearchUpsertTableSinkBase.Host> hosts,
String index,
String docType,
String keyDelimiter,
String keyNullLiteral,
SerializationSchema<Row> serializationSchema,
org.elasticsearch.common.xcontent.XContentType contentType,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
Modifier and Type | Method and Description |
---|---|
protected ElasticsearchUpsertTableSinkBase |
Elasticsearch7UpsertTableSink.copy(boolean isAppendOnly,
TableSchema schema,
List<ElasticsearchUpsertTableSinkBase.Host> hosts,
String index,
String docType,
String keyDelimiter,
String keyNullLiteral,
SerializationSchema<Row> serializationSchema,
org.elasticsearch.common.xcontent.XContentType contentType,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions,
ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory) |
protected ElasticsearchUpsertTableSinkBase |
Elasticsearch7UpsertTableSinkFactory.createElasticsearchUpsertTableSink(boolean isAppendOnly,
TableSchema schema,
List<ElasticsearchUpsertTableSinkBase.Host> hosts,
String index,
String docType,
String keyDelimiter,
String keyNullLiteral,
SerializationSchema<Row> serializationSchema,
org.elasticsearch.common.xcontent.XContentType contentType,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
Constructor and Description |
---|
Elasticsearch7UpsertTableSink(boolean isAppendOnly,
TableSchema schema,
List<ElasticsearchUpsertTableSinkBase.Host> hosts,
String index,
String keyDelimiter,
String keyNullLiteral,
SerializationSchema<Row> serializationSchema,
org.elasticsearch.common.xcontent.XContentType contentType,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
Modifier and Type | Method and Description |
---|---|
TableSchema |
KafkaTableSourceBase.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
protected KafkaTableSinkBase |
KafkaTableSourceSinkFactory.createKafkaTableSink(TableSchema schema,
String topic,
Properties properties,
Optional<FlinkKafkaPartitioner<Row>> partitioner,
SerializationSchema<Row> serializationSchema) |
protected KafkaTableSinkBase |
Kafka011TableSourceSinkFactory.createKafkaTableSink(TableSchema schema,
String topic,
Properties properties,
Optional<FlinkKafkaPartitioner<Row>> partitioner,
SerializationSchema<Row> serializationSchema) |
protected KafkaTableSinkBase |
Kafka010TableSourceSinkFactory.createKafkaTableSink(TableSchema schema,
String topic,
Properties properties,
Optional<FlinkKafkaPartitioner<Row>> partitioner,
SerializationSchema<Row> serializationSchema) |
protected abstract KafkaTableSinkBase |
KafkaTableSourceSinkFactoryBase.createKafkaTableSink(TableSchema schema,
String topic,
Properties properties,
Optional<FlinkKafkaPartitioner<Row>> partitioner,
SerializationSchema<Row> serializationSchema)
Constructs the version-specific Kafka table sink.
|
protected KafkaTableSourceBase |
KafkaTableSourceSinkFactory.createKafkaTableSource(TableSchema schema,
Optional<String> proctimeAttribute,
List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors,
Map<String,String> fieldMapping,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis) |
protected KafkaTableSourceBase |
Kafka011TableSourceSinkFactory.createKafkaTableSource(TableSchema schema,
Optional<String> proctimeAttribute,
List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors,
Map<String,String> fieldMapping,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis) |
protected KafkaTableSourceBase |
Kafka010TableSourceSinkFactory.createKafkaTableSource(TableSchema schema,
Optional<String> proctimeAttribute,
List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors,
Map<String,String> fieldMapping,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis) |
protected abstract KafkaTableSourceBase |
KafkaTableSourceSinkFactoryBase.createKafkaTableSource(TableSchema schema,
Optional<String> proctimeAttribute,
List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors,
Map<String,String> fieldMapping,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis)
Constructs the version-specific Kafka table source.
|
Constructor and Description |
---|
Kafka010TableSink(TableSchema schema,
String topic,
Properties properties,
Optional<FlinkKafkaPartitioner<Row>> partitioner,
SerializationSchema<Row> serializationSchema) |
Kafka010TableSource(TableSchema schema,
Optional<String> proctimeAttribute,
List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors,
Optional<Map<String,String>> fieldMapping,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis)
Creates a Kafka 0.10
StreamTableSource . |
Kafka010TableSource(TableSchema schema,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema)
Creates a Kafka 0.10
StreamTableSource . |
Kafka011TableSink(TableSchema schema,
String topic,
Properties properties,
Optional<FlinkKafkaPartitioner<Row>> partitioner,
SerializationSchema<Row> serializationSchema) |
Kafka011TableSource(TableSchema schema,
Optional<String> proctimeAttribute,
List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors,
Optional<Map<String,String>> fieldMapping,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis)
Creates a Kafka 0.11
StreamTableSource . |
Kafka011TableSource(TableSchema schema,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema)
Creates a Kafka 0.11
StreamTableSource . |
KafkaTableSink(TableSchema schema,
String topic,
Properties properties,
Optional<FlinkKafkaPartitioner<Row>> partitioner,
SerializationSchema<Row> serializationSchema) |
KafkaTableSinkBase(TableSchema schema,
String topic,
Properties properties,
Optional<FlinkKafkaPartitioner<Row>> partitioner,
SerializationSchema<Row> serializationSchema) |
KafkaTableSource(TableSchema schema,
Optional<String> proctimeAttribute,
List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors,
Optional<Map<String,String>> fieldMapping,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis)
Creates a generic Kafka
StreamTableSource . |
KafkaTableSource(TableSchema schema,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema)
Creates a generic Kafka
StreamTableSource . |
KafkaTableSourceBase(TableSchema schema,
Optional<String> proctimeAttribute,
List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors,
Optional<Map<String,String>> fieldMapping,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis)
Creates a generic Kafka
StreamTableSource . |
KafkaTableSourceBase(TableSchema schema,
String topic,
Properties properties,
DeserializationSchema<Row> deserializationSchema)
Creates a generic Kafka
StreamTableSource . |
Modifier and Type | Method and Description |
---|---|
TableSchema |
TableSchema.Builder.build()
Returns a
TableSchema instance. |
TableSchema |
TableSchema.copy()
Returns a deep copy of the table schema.
|
static TableSchema |
TableSchema.fromTypeInfo(TypeInformation<?> typeInfo)
Deprecated.
This method will be removed soon. Use
DataTypes to declare types. |
TableSchema |
Table.getSchema()
Returns the schema of this table.
|
TableSchema |
TableResult.getTableSchema()
Gets the schema of the result.
|
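The core TableSchema methods above can be sketched as follows: build a schema with DataTypes (the recommended replacement for the deprecated fromTypeInfo), then take a deep copy. The field names are illustrative.

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;

public class TableSchemaBasics {
    public static void main(String[] args) {
        // TableSchema.Builder.build() returns a TableSchema instance.
        TableSchema schema = TableSchema.builder()
            .field("user_id", DataTypes.BIGINT().notNull())
            .field("name", DataTypes.STRING())
            .build();

        // copy() returns a deep copy, independent of the original.
        TableSchema copy = schema.copy();

        System.out.println(schema.getFieldNames().length); // 2
    }
}
```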
Modifier and Type | Method and Description |
---|---|
TableSchema |
TableImpl.getSchema() |
TableSchema |
CatalogTableSchemaResolver.resolve(TableSchema tableSchema)
Resolve the computed column's type for the given schema.
|
Modifier and Type | Method and Description |
---|---|
TableSchema |
CatalogTableSchemaResolver.resolve(TableSchema tableSchema)
Resolve the computed column's type for the given schema.
|
Modifier and Type | Method and Description |
---|---|
static <T1> TableSchema |
ConnectorCatalogTable.calculateSourceSchema(TableSource<T1> source,
boolean isBatch) |
TableSchema |
CatalogManager.TableLookupResult.getResolvedSchema() |
TableSchema |
AbstractCatalogView.getSchema() |
TableSchema |
AbstractCatalogTable.getSchema() |
TableSchema |
CatalogBaseTable.getSchema()
Get the schema of the table.
|
Modifier and Type | Method and Description |
---|---|
static QueryOperationCatalogViewTable |
QueryOperationCatalogViewTable.createCalciteTable(QueryOperationCatalogView catalogView,
TableSchema resolvedSchema) |
static CatalogManager.TableLookupResult |
CatalogManager.TableLookupResult.permanent(CatalogBaseTable table,
TableSchema resolvedSchema) |
static Map<String,String> |
CatalogTableImpl.removeRedundant(Map<String,String> properties,
TableSchema schema,
List<String> partitionKeys)
Construct catalog table properties from
CatalogTableImpl.toProperties() . |
static CatalogManager.TableLookupResult |
CatalogManager.TableLookupResult.temporary(CatalogBaseTable table,
TableSchema resolvedSchema) |
Constructor and Description |
---|
AbstractCatalogTable(TableSchema tableSchema,
List<String> partitionKeys,
Map<String,String> properties,
String comment) |
AbstractCatalogTable(TableSchema tableSchema,
Map<String,String> properties,
String comment) |
AbstractCatalogView(String originalQuery,
String expandedQuery,
TableSchema schema,
Map<String,String> properties,
String comment) |
CatalogTableBuilder(ConnectorDescriptor connectorDescriptor,
TableSchema tableSchema) |
CatalogTableImpl(TableSchema tableSchema,
List<String> partitionKeys,
Map<String,String> properties,
String comment) |
CatalogTableImpl(TableSchema tableSchema,
Map<String,String> properties,
String comment) |
CatalogViewImpl(String originalQuery,
String expandedQuery,
TableSchema schema,
Map<String,String> properties,
String comment) |
ConnectorCatalogTable(TableSource<T1> tableSource,
TableSink<T2> tableSink,
TableSchema tableSchema,
boolean isBatch) |
Modifier and Type | Method and Description |
---|---|
static TableSchema |
HiveTableUtil.createTableSchema(List<org.apache.hadoop.hive.metastore.api.FieldSchema> cols,
List<org.apache.hadoop.hive.metastore.api.FieldSchema> partitionKeys,
Set<String> notNullColumns,
UniqueConstraint primaryKey)
Creates a Flink TableSchema from a Hive table's columns and partition keys.
|
Modifier and Type | Method and Description |
---|---|
static List<org.apache.hadoop.hive.metastore.api.FieldSchema> |
HiveTableUtil.createHiveColumns(TableSchema schema)
Creates Hive columns from a Flink TableSchema.
|
Modifier and Type | Method and Description |
---|---|
TableSchema |
ResultDescriptor.getResultSchema() |
TableSchema |
Executor.getTableSchema(String sessionId,
String name)
Returns the schema of a table.
|
Constructor and Description |
---|
ResultDescriptor(String resultId,
TableSchema resultSchema,
boolean isMaterialized,
boolean isTableauMode) |
Modifier and Type | Method and Description |
---|---|
TableSchema |
CollectBatchTableSink.getTableSchema() |
TableSchema |
LocalExecutor.getTableSchema(String sessionId,
String name) |
Modifier and Type | Method and Description |
---|---|
<T> DynamicResult<T> |
ResultStore.createResult(Environment env,
TableSchema schema,
ExecutionConfig config,
ClassLoader classLoader)
Creates a result.
|
Constructor and Description |
---|
CollectBatchTableSink(String accumulatorName,
TypeSerializer<Row> serializer,
TableSchema tableSchema) |
CollectStreamTableSink(InetAddress targetAddress,
int targetPort,
TypeSerializer<Tuple2<Boolean,Row>> serializer,
TableSchema tableSchema) |
Constructor and Description |
---|
ChangelogCollectStreamResult(TableSchema tableSchema,
ExecutionConfig config,
InetAddress gatewayAddress,
int gatewayPort,
ClassLoader classLoader) |
CollectStreamResult(TableSchema tableSchema,
ExecutionConfig config,
InetAddress gatewayAddress,
int gatewayPort,
ClassLoader classLoader) |
MaterializedCollectBatchResult(TableSchema tableSchema,
ExecutionConfig config,
ClassLoader classLoader) |
MaterializedCollectStreamResult(TableSchema tableSchema,
ExecutionConfig config,
InetAddress gatewayAddress,
int gatewayPort,
int maxRowCount,
ClassLoader classLoader) |
MaterializedCollectStreamResult(TableSchema tableSchema,
ExecutionConfig config,
InetAddress gatewayAddress,
int gatewayPort,
int maxRowCount,
int overcommitThreshold,
ClassLoader classLoader) |
Modifier and Type | Method and Description |
---|---|
SelectTableSink |
Planner.createSelectTableSink(TableSchema tableSchema)
Creates a
SelectTableSink for a select query. |
ResolvedExpression |
Parser.parseSqlExpression(String sqlExpression,
TableSchema inputSchema)
Entry point for parsing SQL expressions expressed as a String.
|
Modifier and Type | Method and Description |
---|---|
static TableSchema |
SchemaValidator.deriveTableSinkSchema(DescriptorProperties properties)
Deprecated.
This method combines two separate concepts of table schema and field mapping.
This should be split into two methods once we have support for the corresponding
interfaces (see FLINK-9870).
|
TableSchema |
DescriptorProperties.getTableSchema(String key)
Returns a table schema under the given existing key.
|
Modifier and Type | Method and Description |
---|---|
Optional<TableSchema> |
DescriptorProperties.getOptionalTableSchema(String key)
Returns a table schema under the given key if it exists.
|
Modifier and Type | Method and Description |
---|---|
void |
DescriptorProperties.putTableSchema(String key,
TableSchema schema)
Adds a table schema under the given key.
|
OldCsv |
OldCsv.schema(TableSchema schema)
Deprecated.
OldCsv derives the format schema from the table schema by default, so it is no
longer necessary to declare the format schema explicitly. This method will be removed
in the future. |
Schema |
Schema.schema(TableSchema schema)
Sets the schema with field names and the types.
|
Modifier and Type | Method and Description |
---|---|
static TableSchema |
TableFormatFactoryBase.deriveSchema(Map<String,String> properties)
Finds the table schema that can be used for a format schema (without time attributes and
generated columns).
|
TableSchema |
FileSystemFormatFactory.ReaderContext.getSchema()
Full schema of the table.
|
TableSchema |
FileSystemFormatFactory.WriterContext.getSchema()
Full schema of the table.
|
Modifier and Type | Method and Description |
---|---|
TableSchema |
FileSystemTableSink.getTableSchema() |
TableSchema |
FileSystemTableSource.getTableSchema() |
Constructor and Description |
---|
FileSystemTableSink(ObjectIdentifier tableIdentifier,
boolean isBounded,
TableSchema schema,
Path path,
List<String> partitionKeys,
String defaultPartName,
Map<String,String> properties)
Construct a file system table sink.
|
FileSystemTableSource(TableSchema schema,
Path path,
List<String> partitionKeys,
String defaultPartName,
Map<String,String> properties)
Construct a file system table source.
|
Modifier and Type | Method and Description |
---|---|
TableSchema |
ScalaDataStreamQueryOperation.getTableSchema() |
TableSchema |
DataSetQueryOperation.getTableSchema() |
TableSchema |
JavaDataStreamQueryOperation.getTableSchema() |
TableSchema |
CatalogQueryOperation.getTableSchema() |
TableSchema |
ProjectQueryOperation.getTableSchema() |
TableSchema |
SetQueryOperation.getTableSchema() |
TableSchema |
QueryOperation.getTableSchema()
Resolved schema of this operation.
|
TableSchema |
JoinQueryOperation.getTableSchema() |
TableSchema |
FilterQueryOperation.getTableSchema() |
TableSchema |
ValuesQueryOperation.getTableSchema() |
TableSchema |
CalculatedQueryOperation.getTableSchema() |
TableSchema |
DistinctQueryOperation.getTableSchema() |
TableSchema |
TableSourceQueryOperation.getTableSchema() |
TableSchema |
SortQueryOperation.getTableSchema() |
TableSchema |
AggregateQueryOperation.getTableSchema() |
TableSchema |
WindowAggregateQueryOperation.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
ResolvedExpression |
ParserImpl.parseSqlExpression(String sqlExpression,
TableSchema inputSchema) |
Modifier and Type | Method and Description |
---|---|
ResolvedExpression |
ParserImpl.parseSqlExpression(String sqlExpression,
TableSchema inputSchema) |
Constructor and Description |
---|
ParserImpl(CatalogManager catalogManager,
java.util.function.Supplier<org.apache.flink.table.planner.calcite.FlinkPlannerImpl> validatorSupplier,
java.util.function.Supplier<CalciteParser> calciteParserSupplier,
java.util.function.Function<TableSchema,SqlExprToRexConverter> sqlExprToRexConverterCreator) |
Modifier and Type | Method and Description |
---|---|
TableSchema |
PlannerQueryOperation.getTableSchema() |
TableSchema |
DataStreamQueryOperation.getTableSchema() |
Constructor and Description |
---|
DataStreamQueryOperation(ObjectIdentifier identifier,
DataStream<E> dataStream,
int[] fieldIndices,
TableSchema tableSchema,
boolean[] fieldNullables,
org.apache.flink.table.planner.plan.stats.FlinkStatistic statistic) |
Modifier and Type | Method and Description |
---|---|
static TableSchema |
SelectTableSinkSchemaConverter.changeDefaultConversionClass(TableSchema tableSchema)
Changes the conversion class to the default and builds a new
TableSchema . |
static TableSchema |
SelectTableSinkSchemaConverter.convertTimeAttributeToRegularTimestamp(TableSchema tableSchema)
Converts time attributes (processing time / event time) to regular timestamps and builds a new
TableSchema . |
TableSchema |
SelectTableSinkBase.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
static TableSchema |
SelectTableSinkSchemaConverter.changeDefaultConversionClass(TableSchema tableSchema)
Changes the conversion class to the default and builds a new
TableSchema . |
static TableSchema |
SelectTableSinkSchemaConverter.convertTimeAttributeToRegularTimestamp(TableSchema tableSchema)
Converts time attributes (processing time / event time) to regular timestamps and builds a new
TableSchema . |
Constructor and Description |
---|
BatchSelectTableSink(TableSchema tableSchema) |
SelectTableSinkBase(TableSchema tableSchema) |
StreamSelectTableSink(TableSchema tableSchema) |
Modifier and Type | Method and Description |
---|---|
TableSchema |
AbstractArrowTableSource.getTableSchema() |
Modifier and Type | Method and Description |
---|---|
TableSchema |
BatchSelectTableSink.getTableSchema() |
TableSchema |
StreamSelectTableSink.getTableSchema() |
TableSchema |
CsvTableSink.getTableSchema() |
default TableSchema |
TableSink.getTableSchema()
Returns the schema of the consumed table.
|
Constructor and Description |
---|
BatchSelectTableSink(TableSchema tableSchema) |
StreamSelectTableSink(TableSchema tableSchema) |
Modifier and Type | Method and Description |
---|---|
TableSchema |
CsvTableSource.getTableSchema() |
TableSchema |
TableSource.getTableSchema()
Deprecated.
Table schema is a logical description of a table and should not be part of the
physical TableSource. Define schema when registering a Table either in DDL or in
TableEnvironment#connect(...) . |
Modifier and Type | Method and Description |
---|---|
static List<LogicalType> |
CsvTableSourceFactoryBase.getFieldLogicalTypes(TableSchema schema) |
static void |
TableSourceValidation.validateTableSource(TableSource<?> tableSource,
TableSchema schema)
Validates a TableSource.
|
Modifier and Type | Method and Description |
---|---|
static TableSchema |
DataTypeUtils.expandCompositeTypeToSchema(DataType dataType)
Expands a composite
DataType to a corresponding TableSchema . |
Modifier and Type | Method and Description |
---|---|
TableSchema |
FieldInfoUtils.TypeInfoSchema.toTableSchema() |
Modifier and Type | Method and Description |
---|---|
static TableSchema |
TableSchemaUtils.checkNoGeneratedColumns(TableSchema schema)
Throws an exception if the given
TableSchema contains any generated columns. |
static TableSchema |
TableSchemaUtils.dropConstraint(TableSchema oriSchema,
String constraintName)
Creates a new schema with the constraint of the given name dropped.
|
static TableSchema |
TableSchemaUtils.getPhysicalSchema(TableSchema tableSchema)
Return
TableSchema which consists of all physical columns. |
static TableSchema |
TableSchemaUtils.projectSchema(TableSchema tableSchema,
int[][] projectedFields)
Creates a new
TableSchema with the projected fields from another TableSchema . |
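The TableSchemaUtils helpers above can be sketched like this. The field names are illustrative; the claim about the nested int[][] shape reflects the projection-pushdown convention and should be treated as an assumption.

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.utils.TableSchemaUtils;

public class SchemaUtilsExample {
    public static void main(String[] args) {
        TableSchema schema = TableSchema.builder()
            .field("a", DataTypes.INT())
            .field("b", DataTypes.STRING())
            .field("c", DataTypes.DOUBLE())
            .build();

        // Keep fields 2 and 0, in that order. The nested int[][] shape also
        // allows projecting into nested rows; top-level fields use one index.
        TableSchema projected =
            TableSchemaUtils.projectSchema(schema, new int[][] {{2}, {0}});

        // getPhysicalSchema drops computed columns (a no-op for this schema).
        TableSchema physical = TableSchemaUtils.getPhysicalSchema(schema);

        System.out.println(projected.getFieldNames()[0]); // "c"
    }
}
```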
Modifier and Type | Method and Description |
---|---|
static TableSchema.Builder |
TableSchemaUtils.builderWithGivenSchema(TableSchema oriSchema)
Creates a builder with given table schema.
|
static TableSchema |
TableSchemaUtils.checkNoGeneratedColumns(TableSchema schema)
Throws an exception if the given
TableSchema contains any generated columns. |
static boolean |
TableSchemaUtils.containsGeneratedColumns(TableSchema schema)
Returns true if there are any generated columns in the given
TableSchema . |
static TableSchema |
TableSchemaUtils.dropConstraint(TableSchema oriSchema,
String constraintName)
Creates a new schema with the constraint of the given name dropped.
|
static TableSchema |
TableSchemaUtils.getPhysicalSchema(TableSchema tableSchema)
Return
TableSchema which consists of all physical columns. |
static int[] |
TableSchemaUtils.getPrimaryKeyIndices(TableSchema schema)
Returns the field indices of the primary key within the physical columns of this schema
(computed columns are not included).
|
static void |
PrintUtils.printAsTableauForm(TableSchema tableSchema,
Iterator<Row> it,
PrintWriter printWriter)
Displays the result in a tableau form.
|
static void |
PrintUtils.printAsTableauForm(TableSchema tableSchema,
Iterator<Row> it,
PrintWriter printWriter,
int maxColumnWidth,
String nullColumn,
boolean deriveColumnWidthByType)
Displays the result in a tableau form.
|
static TableSchema |
TableSchemaUtils.projectSchema(TableSchema tableSchema,
int[][] projectedFields)
Creates a new
TableSchema with the projected fields from another TableSchema . |
Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.