Modifier and Type | Method and Description |
---|---|
DataGeneratorContainer |
RandomGeneratorVisitor.visit(RowType rowType) |
Modifier and Type | Field and Description |
---|---|
protected RowType |
AbstractJdbcRowConverter.rowType |
Constructor and Description |
---|
AbstractJdbcRowConverter(RowType rowType) |
Modifier and Type | Method and Description |
---|---|
JdbcRowConverter |
JdbcDialect.getRowConverter(RowType rowType)
Gets a converter that converts between JDBC objects and Flink internal objects.
|
void |
AbstractDialect.validate(RowType rowType) |
void |
JdbcDialect.validate(RowType rowType)
Checks whether this dialect instance supports a specific data type in the table schema.
|
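A minimal sketch of how these dialect hooks are typically wired together. The two-column schema is an illustrative assumption, and the dialect calls are shown as comments because obtaining a concrete `JdbcDialect` depends on the connector setup:

```java
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class DialectSketch {
    public static void main(String[] args) {
        // RowType describing the JDBC table schema (hypothetical columns).
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // With a concrete dialect (e.g. MySqlDialect or PostgresDialect):
        //   dialect.validate(rowType);                            // rejects unsupported types
        //   JdbcRowConverter converter = dialect.getRowConverter(rowType);
        System.out.println(rowType.getFieldNames()); // [id, name]
    }
}
```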
Modifier and Type | Method and Description |
---|---|
JdbcRowConverter |
MySqlDialect.getRowConverter(RowType rowType) |
Modifier and Type | Method and Description |
---|---|
JdbcRowConverter |
PostgresDialect.getRowConverter(RowType rowType) |
Constructor and Description |
---|
DerbyRowConverter(RowType rowType) |
MySQLRowConverter(RowType rowType) |
OracleRowConverter(RowType rowType) |
PostgresRowConverter(RowType rowType) |
Constructor and Description |
---|
JdbcRowDataLookupFunction(JdbcConnectorOptions options,
int maxRetryTimes,
String[] fieldNames,
DataType[] fieldTypes,
String[] keyNames,
RowType rowType) |
Modifier and Type | Method and Description |
---|---|
static PartitionKeyGenerator<RowData> |
KinesisPartitionKeyGeneratorFactory.getKinesisPartitioner(ReadableConfig tableOptions,
RowType physicalType,
List<String> partitionKeys,
ClassLoader classLoader)
Constructs the Kinesis partitioner for a targetTable based on the currently set tableOptions. |
Constructor and Description |
---|
RowDataFieldsKinesisPartitionKeyGenerator(RowType physicalType,
List<String> partitionKeys) |
RowDataFieldsKinesisPartitionKeyGenerator(RowType physicalType,
List<String> partitionKeys,
String delimiter) |
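Constructing the field-based key generator above might look like the following sketch; the physical row type and key column are illustrative assumptions, and the generator's import path should be checked against the connector version:

```java
import java.util.Collections;
import java.util.List;

import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class KinesisKeySketch {
    public static void main(String[] args) {
        // Physical row type of the sink table (hypothetical columns).
        RowType physicalType = RowType.of(
                new LogicalType[] {new VarCharType(VarCharType.MAX_LENGTH), new IntType()},
                new String[] {"user_id", "clicks"});
        List<String> partitionKeys = Collections.singletonList("user_id");

        // new RowDataFieldsKinesisPartitionKeyGenerator(physicalType, partitionKeys)
        // routes each RowData to a shard by the "user_id" value; the
        // three-argument constructor additionally sets the key-field delimiter.
        System.out.println(physicalType.getFieldIndex("user_id")); // 0
    }
}
```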
Constructor and Description |
---|
KinesisStreamsConnectorOptionsUtils(Map<String,String> options,
ReadableConfig tableOptions,
RowType physicalType,
List<String> partitionKeys,
ClassLoader classLoader) |
Constructor and Description |
---|
FileSystemLookupFunction(PartitionFetcher<P> partitionFetcher,
PartitionFetcher.Context<P> fetcherContext,
PartitionReader<P,RowData> partitionReader,
RowType rowType,
int[] lookupKeys,
java.time.Duration reloadInterval) |
Constructor and Description |
---|
HiveBulkFormatAdapter(JobConfWrapper jobConfWrapper,
List<String> partitionKeys,
String[] fieldNames,
DataType[] fieldTypes,
String hiveVersion,
RowType producedRowType,
boolean useMapRedReader)
Deprecated.
|
HiveCompactReaderFactory(org.apache.hadoop.hive.metastore.api.StorageDescriptor sd,
Properties properties,
org.apache.hadoop.mapred.JobConf jobConf,
CatalogTable catalogTable,
String hiveVersion,
RowType producedRowType,
boolean useMapRedReader) |
HiveInputFormat(JobConfWrapper jobConfWrapper,
List<String> partitionKeys,
String[] fieldNames,
DataType[] fieldTypes,
String hiveVersion,
RowType producedRowType,
boolean useMapRedReader) |
Modifier and Type | Method and Description |
---|---|
static AvroToRowDataConverters.AvroToRowDataConverter |
AvroToRowDataConverters.createRowConverter(RowType rowType) |
Constructor and Description |
---|
AvroRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> typeInfo)
Creates an Avro deserialization schema for the given logical type.
|
AvroRowDataSerializationSchema(RowType rowType)
Creates an Avro serialization schema with the given record row type.
|
AvroRowDataSerializationSchema(RowType rowType,
SerializationSchema<org.apache.avro.generic.GenericRecord> nestedSchema,
RowDataToAvroConverters.RowDataToAvroConverter runtimeConverter)
Creates an Avro serialization schema with the given record row type, nested schema and
runtime converters.
|
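A short usage sketch for the Avro constructors above. The schema layout is an illustrative assumption, and `InternalTypeInfo` (from flink-table-runtime) is one common way to supply the produced `TypeInformation<RowData>`:

```java
import org.apache.flink.formats.avro.AvroRowDataDeserializationSchema;
import org.apache.flink.formats.avro.AvroRowDataSerializationSchema;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class AvroSchemaSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // Serialization: the Avro record schema is derived from the row type.
        AvroRowDataSerializationSchema ser = new AvroRowDataSerializationSchema(rowType);

        // Deserialization: additionally needs the produced TypeInformation<RowData>.
        AvroRowDataDeserializationSchema deser =
                new AvroRowDataDeserializationSchema(rowType, InternalTypeInfo.of(rowType));
    }
}
```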
Modifier and Type | Method and Description |
---|---|
static RowType |
DebeziumAvroSerializationSchema.createDebeziumAvroRowType(DataType dataType) |
static RowType |
DebeziumAvroDeserializationSchema.createDebeziumAvroRowType(DataType databaseSchema) |
Constructor and Description |
---|
DebeziumAvroDeserializationSchema(RowType rowType,
TypeInformation<RowData> producedTypeInfo,
String schemaRegistryUrl,
Map<String,?> registryConfigs) |
DebeziumAvroSerializationSchema(RowType rowType,
String schemaRegistryUrl,
String schemaRegistrySubject,
Map<String,?> registryConfigs) |
Modifier and Type | Method and Description |
---|---|
static org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema |
CsvRowSchemaConverter.convert(RowType rowType)
Converts a RowType to a CsvSchema. |
static org.apache.flink.formats.csv.RowDataToCsvConverters.RowDataToCsvConverter |
RowDataToCsvConverters.createRowConverter(RowType type) |
org.apache.flink.formats.csv.CsvToRowDataConverters.CsvToRowDataConverter |
CsvToRowDataConverters.createRowConverter(RowType rowType,
boolean isTopLevel) |
Constructor and Description |
---|
Builder(RowType rowType)
Creates a CsvRowDataSerializationSchema expecting the given RowType. |
Builder(RowType rowReadType,
RowType rowResultType,
TypeInformation<RowData> resultTypeInfo)
Creates a CSV deserialization schema for the given
TypeInformation with optional
parameters. |
Builder(RowType rowType,
TypeInformation<RowData> resultTypeInfo)
Creates a CSV deserialization schema for the given
TypeInformation with optional
parameters. |
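A sketch tying the CSV converter and builder together; the row layout is an illustrative assumption:

```java
import org.apache.flink.formats.csv.CsvRowDataSerializationSchema;
import org.apache.flink.formats.csv.CsvRowSchemaConverter;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class CsvSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // Derive the shaded-Jackson CsvSchema that the format uses internally.
        var csvSchema = CsvRowSchemaConverter.convert(rowType);

        // Build a serialization schema for the same row type; the Builder
        // exposes further options (delimiters, quoting) not shown here.
        CsvRowDataSerializationSchema schema =
                new CsvRowDataSerializationSchema.Builder(rowType).build();
    }
}
```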
Modifier and Type | Method and Description |
---|---|
JsonToRowDataConverters.JsonToRowDataConverter |
JsonToRowDataConverters.createRowConverter(RowType rowType) |
Constructor and Description |
---|
JsonRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> resultTypeInfo,
boolean failOnMissingField,
boolean ignoreParseErrors,
TimestampFormat timestampFormat) |
JsonRowDataSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonFormatOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral,
boolean encodeDecimalAsPlainNumber) |
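A hedged sketch of the two JSON constructors listed above; the row type, flag values, and use of `InternalTypeInfo` are illustrative assumptions:

```java
import org.apache.flink.formats.common.TimestampFormat;
import org.apache.flink.formats.json.JsonFormatOptions;
import org.apache.flink.formats.json.JsonRowDataDeserializationSchema;
import org.apache.flink.formats.json.JsonRowDataSerializationSchema;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class JsonSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        JsonRowDataDeserializationSchema deser = new JsonRowDataDeserializationSchema(
                rowType,
                InternalTypeInfo.of(rowType),   // resultTypeInfo
                false,                          // failOnMissingField
                true,                           // ignoreParseErrors
                TimestampFormat.ISO_8601);

        JsonRowDataSerializationSchema ser = new JsonRowDataSerializationSchema(
                rowType,
                TimestampFormat.ISO_8601,
                JsonFormatOptions.MapNullKeyMode.LITERAL,
                "null",                         // mapNullKeyLiteral
                false);                         // encodeDecimalAsPlainNumber
    }
}
```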
Constructor and Description |
---|
CanalJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonFormatOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral,
boolean encodeDecimalAsPlainNumber) |
Constructor and Description |
---|
DebeziumJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonFormatOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral,
boolean encodeDecimalAsPlainNumber) |
Constructor and Description |
---|
MaxwellJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonFormatOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral,
boolean encodeDecimalAsPlainNumber) |
Constructor and Description |
---|
OggJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonFormatOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral,
boolean encodeDecimalAsPlainNumber) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
ParquetColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig,
RowType producedRowType,
TypeInformation<RowData> producedTypeInfo,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive)
Creates a partitioned ParquetColumnarRowInputFormat; the partition columns can be generated from the Path. |
Constructor and Description |
---|
ParquetColumnarRowInputFormat(Configuration hadoopConfig,
RowType projectedType,
TypeInformation<RowData> producedTypeInfo,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive)
Constructor to create a Parquet format without extra fields.
|
ParquetVectorizedInputFormat(SerializableConfiguration hadoopConfig,
RowType projectedType,
ColumnBatchFactory<SplitT> batchFactory,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive) |
Modifier and Type | Method and Description |
---|---|
static ParquetWriterFactory<RowData> |
ParquetRowDataBuilder.createWriterFactory(RowType rowType,
Configuration conf,
boolean utcTimestamp)
Creates a Parquet BulkWriter.Factory. |
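The factory method above is typically handed to a bulk-format file sink; a sketch, where the schema is an illustrative assumption and the builder's package should be checked against the Flink version:

```java
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

public class ParquetSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // utcTimestamp = true writes TIMESTAMP columns in UTC.
        ParquetWriterFactory<RowData> factory =
                ParquetRowDataBuilder.createWriterFactory(rowType, new Configuration(), true);

        // The factory can then be used with a bulk-format sink, e.g.
        //   FileSink.forBulkFormat(outputPath, factory).build()
    }
}
```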
Constructor and Description |
---|
FlinkParquetBuilder(RowType rowType,
Configuration conf,
boolean utcTimestamp) |
ParquetRowDataBuilder(org.apache.parquet.io.OutputFile path,
RowType rowType,
boolean utcTimestamp) |
ParquetRowDataWriter(org.apache.parquet.io.api.RecordConsumer recordConsumer,
RowType rowType,
org.apache.parquet.schema.GroupType schema,
boolean utcTimestamp) |
Modifier and Type | Method and Description |
---|---|
static org.apache.parquet.schema.MessageType |
ParquetSchemaConverter.convertToParquetMessageType(String name,
RowType rowType) |
Modifier and Type | Method and Description |
---|---|
static PbCodegenDeserializer |
PbCodegenDeserializeFactory.getPbCodegenTopRowDes(com.google.protobuf.Descriptors.Descriptor descriptor,
RowType rowType,
PbFormatContext formatContext) |
Constructor and Description |
---|
PbCodegenRowDeserializer(com.google.protobuf.Descriptors.Descriptor descriptor,
RowType rowType,
PbFormatContext formatContext) |
PbRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> resultTypeInfo,
PbFormatConfig formatConfig) |
ProtoToRowConverter(RowType rowType,
PbFormatConfig formatConfig) |
Modifier and Type | Method and Description |
---|---|
static PbCodegenSerializer |
PbCodegenSerializeFactory.getPbCodegenTopRowSer(com.google.protobuf.Descriptors.Descriptor descriptor,
RowType rowType,
PbFormatContext formatContext) |
Constructor and Description |
---|
PbCodegenRowSerializer(com.google.protobuf.Descriptors.Descriptor descriptor,
RowType rowType,
PbFormatContext formatContext) |
PbRowDataSerializationSchema(RowType rowType,
PbFormatConfig pbFormatConfig) |
RowToProtoConverter(RowType rowType,
PbFormatConfig formatConfig) |
Modifier and Type | Method and Description |
---|---|
static RowType |
PbToRowTypeUtil.generateRowType(com.google.protobuf.Descriptors.Descriptor root) |
static RowType |
PbToRowTypeUtil.generateRowType(com.google.protobuf.Descriptors.Descriptor root,
boolean enumAsInt) |
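Deriving a `RowType` from a compiled protobuf descriptor can be sketched as follows. The well-known `Timestamp` message stands in for a user-defined protoc-generated class, and the utility's package is an assumption to verify against the format jar:

```java
import com.google.protobuf.Timestamp;
import org.apache.flink.formats.protobuf.util.PbToRowTypeUtil; // package is an assumption
import org.apache.flink.table.types.logical.RowType;

public class PbSketch {
    public static void main(String[] args) {
        // Timestamp is used purely as an example descriptor; in practice this
        // would be your own protoc-generated message class.
        RowType rowType = PbToRowTypeUtil.generateRowType(Timestamp.getDescriptor());
        System.out.println(rowType.getFieldNames());

        // The two-argument overload maps protobuf enum fields to INT
        // instead of STRING.
        RowType withIntEnums =
                PbToRowTypeUtil.generateRowType(Timestamp.getDescriptor(), true);
    }
}
```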
Modifier and Type | Method and Description |
---|---|
static void |
PbSchemaValidationUtils.validate(com.google.protobuf.Descriptors.Descriptor descriptor,
RowType rowType) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
OrcColumnarRowInputFormat.createPartitionedFormat(OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> shim,
Configuration hadoopConfig,
RowType tableType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
java.util.function.Function<RowType,TypeInformation<RowData>> rowTypeInfoFactory)
Creates a partitioned OrcColumnarRowInputFormat; the partition columns can be generated from the split. |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
OrcNoHiveColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig,
RowType tableType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
java.util.function.Function<RowType,TypeInformation<RowData>> rowTypeInfoFactory)
Creates a partitioned OrcColumnarRowInputFormat; the partition columns can be generated from the split. |
Constructor and Description |
---|
OrcRowColumnVector(org.apache.hadoop.hive.ql.exec.vector.StructColumnVector hiveVector,
RowType type) |
Modifier and Type | Method and Description |
---|---|
static FlinkFnApi.CoderInfoDescriptor |
ProtoUtils.createArrowTypeCoderInfoDescriptorProto(RowType rowType,
FlinkFnApi.CoderInfoDescriptor.Mode mode,
boolean separatedWithEndMessage) |
static FlinkFnApi.CoderInfoDescriptor |
ProtoUtils.createFlattenRowTypeCoderInfoDescriptorProto(RowType rowType,
FlinkFnApi.CoderInfoDescriptor.Mode mode,
boolean separatedWithEndMessage) |
static FlinkFnApi.CoderInfoDescriptor |
ProtoUtils.createOverWindowArrowTypeCoderInfoDescriptorProto(RowType rowType,
FlinkFnApi.CoderInfoDescriptor.Mode mode,
boolean separatedWithEndMessage) |
static FlinkFnApi.CoderInfoDescriptor |
ProtoUtils.createRowTypeCoderInfoDescriptorProto(RowType rowType,
FlinkFnApi.CoderInfoDescriptor.Mode mode,
boolean separatedWithEndMessage) |
Modifier and Type | Method and Description |
---|---|
RowType |
DynamicFilteringData.getRowType() |
Constructor and Description |
---|
DynamicFilteringData(TypeInformation<RowData> typeInfo,
RowType rowType,
List<byte[]> serializedData,
boolean isFiltering) |
Modifier and Type | Method and Description |
---|---|
ResolvedExpression |
Parser.parseSqlExpression(String sqlExpression,
RowType inputRowType,
LogicalType outputType)
Entry point for parsing SQL expressions expressed as a String.
|
Modifier and Type | Method and Description |
---|---|
ResolvedExpression |
SqlExpressionResolver.resolveExpression(String sqlExpression,
RowType inputRowType,
LogicalType outputType)
Translates the given SQL expression string or throws a ValidationException. |
Modifier and Type | Method and Description |
---|---|
SqlToRexConverter |
RexFactory.createSqlToRexConverter(RowType inputRowType,
LogicalType outputType)
Creates a new instance of SqlToRexConverter to convert a SQL expression to a RexNode. |
Modifier and Type | Method and Description |
---|---|
static RowType |
DynamicSourceUtils.createProducedType(ResolvedSchema schema,
DynamicTableSource source)
Returns the DataType that a source should produce as input into the runtime. |
Modifier and Type | Method and Description |
---|---|
ResolvedExpression |
ParserImpl.parseSqlExpression(String sqlExpression,
RowType inputRowType,
LogicalType outputType) |
Modifier and Type | Method and Description |
---|---|
RowType |
SourceAbilityContext.getSourceRowType() |
Modifier and Type | Method and Description |
---|---|
Optional<RowType> |
SourceAbilitySpecBase.getProducedType() |
Optional<RowType> |
SourceAbilitySpec.getProducedType()
Returns the produced RowType after this ability is applied. |
Modifier and Type | Method and Description |
---|---|
static boolean |
AggregatePushDownSpec.apply(RowType inputType,
List<int[]> groupingSets,
List<org.apache.calcite.rel.core.AggregateCall> aggregateCalls,
RowType producedType,
DynamicTableSource tableSource,
SourceAbilityContext context) |
Constructor and Description |
---|
AggregatePushDownSpec(RowType inputType,
List<int[]> groupingSets,
List<org.apache.calcite.rel.core.AggregateCall> aggregateCalls,
RowType producedType) |
ProjectPushDownSpec(int[][] projectedFields,
RowType producedType) |
ReadingMetadataSpec(List<String> metadataKeys,
RowType producedType) |
SourceAbilityContext(org.apache.flink.table.planner.calcite.FlinkContext context,
org.apache.flink.table.planner.calcite.FlinkTypeFactory typeFactory,
RowType sourceRowType) |
SourceAbilitySpecBase(RowType producedType) |
SourceWatermarkSpec(boolean sourceWatermarkEnabled,
RowType producedType) |
WatermarkPushDownSpec(org.apache.calcite.rex.RexNode watermarkExpr,
long idleTimeoutMillis,
RowType producedType) |
Modifier and Type | Method and Description |
---|---|
protected RowType |
BatchExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType) |
protected RowType |
BatchExecOverAggregateBase.getInputTypeWithConstants() |
Modifier and Type | Method and Description |
---|---|
protected RowType |
BatchExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType) |
boolean |
BatchExecMatch.isProcTime(RowType inputRowType) |
Constructor and Description |
---|
BatchExecBoundedStreamScan(ReadableConfig tableConfig,
DataStream<?> dataStream,
DataType sourceType,
int[] fieldIndexes,
List<String> qualifiedName,
RowType outputType,
String description) |
BatchExecCalc(ReadableConfig tableConfig,
List<org.apache.calcite.rex.RexNode> projection,
org.apache.calcite.rex.RexNode condition,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecCorrelate(ReadableConfig tableConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
org.apache.calcite.rex.RexNode condition,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecDynamicFilteringDataCollector(List<Integer> dynamicFilteringFieldIndices,
ReadableConfig tableConfig,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecExchange(ReadableConfig tableConfig,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecExpand(ReadableConfig tableConfig,
List<List<org.apache.calcite.rex.RexNode>> projects,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecHashAggregate(ReadableConfig tableConfig,
int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
RowType aggInputRowType,
boolean isMerge,
boolean isFinal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecHashJoin(ReadableConfig tableConfig,
JoinSpec joinSpec,
int estimatedLeftAvgRowSize,
int estimatedRightAvgRowSize,
long estimatedLeftRowCount,
long estimatedRightRowCount,
boolean leftIsBuild,
boolean tryDistinctBuildRow,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
BatchExecHashWindowAggregate(ReadableConfig tableConfig,
int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
int inputTimeFieldIndex,
boolean inputTimeIsDate,
NamedWindowProperty[] namedWindowProperties,
RowType aggInputRowType,
boolean enableAssignPane,
boolean isMerge,
boolean isFinal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecLegacyTableSourceScan(ReadableConfig tableConfig,
TableSource<?> tableSource,
List<String> qualifiedName,
RowType outputType,
String description) |
BatchExecLookupJoin(ReadableConfig tableConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexNode joinCondition,
TemporalTableSourceSpec temporalTableSourceSpec,
Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
List<org.apache.calcite.rex.RexNode> projectionOnTemporalTable,
org.apache.calcite.rex.RexNode filterOnTemporalTable,
LookupJoinUtil.AsyncLookupOptions asyncLookupOptions,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecMatch(ReadableConfig tableConfig,
MatchSpec matchSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecNestedLoopJoin(ReadableConfig tableConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexNode condition,
boolean leftIsBuild,
boolean singleRowJoin,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
BatchExecOverAggregate(ReadableConfig tableConfig,
OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecOverAggregateBase(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonCalc(ReadableConfig tableConfig,
List<org.apache.calcite.rex.RexNode> projection,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonCorrelate(ReadableConfig tableConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonGroupAggregate(ReadableConfig tableConfig,
int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonGroupWindowAggregate(ReadableConfig tableConfig,
int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
int inputTimeFieldIndex,
NamedWindowProperty[] namedWindowProperties,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonOverAggregate(ReadableConfig tableConfig,
OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecRank(ReadableConfig tableConfig,
int[] partitionFields,
int[] sortFields,
long rankStart,
long rankEnd,
boolean outputRankNumber,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecSort(ReadableConfig tableConfig,
SortSpec sortSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecSortAggregate(ReadableConfig tableConfig,
int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
RowType aggInputRowType,
boolean isMerge,
boolean isFinal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecSortLimit(ReadableConfig tableConfig,
SortSpec sortSpec,
long limitStart,
long limitEnd,
boolean isGlobal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecSortMergeJoin(ReadableConfig tableConfig,
FlinkJoinType joinType,
int[] leftKeys,
int[] rightKeys,
boolean[] filterNulls,
org.apache.calcite.rex.RexNode nonEquiCondition,
boolean leftIsSmaller,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
BatchExecSortWindowAggregate(ReadableConfig tableConfig,
int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
int inputTimeFieldIndex,
boolean inputTimeIsDate,
NamedWindowProperty[] namedWindowProperties,
RowType aggInputRowType,
boolean enableAssignPane,
boolean isMerge,
boolean isFinal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecTableSourceScan(ReadableConfig tableConfig,
DynamicTableSourceSpec tableSourceSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecTableSourceScan(ReadableConfig tableConfig,
DynamicTableSourceSpec tableSourceSpec,
RowType outputType,
String description) |
BatchExecUnion(ReadableConfig tableConfig,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
BatchExecValues(ReadableConfig tableConfig,
List<List<RexLiteral>> tuples,
RowType outputType,
String description) |
BatchExecWindowTableFunction(ReadableConfig tableConfig,
TimeAttributeWindowingStrategy windowingStrategy,
InputProperty inputProperty,
RowType outputType,
String description) |
Modifier and Type | Method and Description |
---|---|
protected abstract RowType |
CommonExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType)
Checks whether the given row type is legal and converts it if needed.
|
Modifier and Type | Method and Description |
---|---|
protected abstract RowType |
CommonExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType)
Checks whether the given row type is legal and converts it if needed.
|
protected void |
CommonExecMatch.checkOrderKeys(RowType inputRowType) |
abstract boolean |
CommonExecMatch.isProcTime(RowType inputRowType) |
protected Transformation<RowData> |
CommonExecMatch.translateOrder(Transformation<RowData> inputTransform,
RowType inputRowType,
ExecNodeConfig config) |
static Tuple2<Pattern<RowData,RowData>,List<String>> |
CommonExecMatch.translatePattern(MatchSpec matchSpec,
ReadableConfig config,
ClassLoader classLoader,
org.apache.calcite.tools.RelBuilder relBuilder,
RowType inputRowType) |
protected void |
CommonExecLookupJoin.validateLookupKeyType(Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
RowType inputRowType,
RowType tableSourceRowType) |
Constructor and Description |
---|
CommonExecCalc(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<org.apache.calcite.rex.RexNode> projection,
org.apache.calcite.rex.RexNode condition,
Class<?> operatorBaseClass,
boolean retainHeader,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecCorrelate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
org.apache.calcite.rex.RexNode condition,
Class<?> operatorBaseClass,
boolean retainHeader,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecExchange(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecExpand(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<List<org.apache.calcite.rex.RexNode>> projects,
boolean retainHeader,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecLegacyTableSourceScan(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
TableSource<?> tableSource,
List<String> qualifiedName,
RowType outputType,
String description) |
CommonExecLookupJoin(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexNode joinCondition,
TemporalTableSourceSpec temporalTableSourceSpec,
Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
List<org.apache.calcite.rex.RexNode> projectionOnTemporalTable,
org.apache.calcite.rex.RexNode filterOnTemporalTable,
LookupJoinUtil.AsyncLookupOptions asyncLookupOptions,
LookupJoinUtil.RetryLookupOptions retryOptions,
ChangelogMode inputChangelogMode,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecPythonCalc(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<org.apache.calcite.rex.RexNode> projection,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecPythonCorrelate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecUnion(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecValues(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<List<RexLiteral>> tuples,
RowType outputType,
String description) |
CommonExecWindowTableFunction(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
TimeAttributeWindowingStrategy windowingStrategy,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
Modifier and Type | Method and Description |
---|---|
LogicalType[] |
SortSpec.getFieldTypes(RowType input)
Gets the field types of the sort fields according to the input type.
|
Modifier and Type | Method and Description |
---|---|
protected RowType |
StreamExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType) |
Modifier and Type | Method and Description |
---|---|
protected RowType |
StreamExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType) |
void |
StreamExecMatch.checkOrderKeys(RowType inputRowType) |
boolean |
StreamExecMatch.isProcTime(RowType inputRowType) |
Transformation<RowData> |
StreamExecMatch.translateOrder(Transformation<RowData> inputTransform,
RowType inputRowType,
ExecNodeConfig config) |
Constructor and Description |
---|
StreamExecCalc(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<org.apache.calcite.rex.RexNode> projection,
org.apache.calcite.rex.RexNode condition,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecCalc(ReadableConfig tableConfig,
List<org.apache.calcite.rex.RexNode> projection,
org.apache.calcite.rex.RexNode condition,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecChangelogNormalize(Integer id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] uniqueKeys,
boolean generateUpdateBefore,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecChangelogNormalize(ReadableConfig tableConfig,
int[] uniqueKeys,
boolean generateUpdateBefore,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecCorrelate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexNode invocation,
org.apache.calcite.rex.RexNode condition,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecCorrelate(ReadableConfig tableConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
org.apache.calcite.rex.RexNode condition,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecDataStreamScan(ReadableConfig tableConfig,
DataStream<?> dataStream,
DataType sourceType,
int[] fieldIndexes,
String[] fieldNames,
List<String> qualifiedName,
RowType outputType,
String description) |
StreamExecDeduplicate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] uniqueKeys,
boolean isRowtime,
boolean keepLastRow,
boolean generateUpdateBefore,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecDeduplicate(ReadableConfig tableConfig,
int[] uniqueKeys,
boolean isRowtime,
boolean keepLastRow,
boolean generateUpdateBefore,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecDropUpdateBefore(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecDropUpdateBefore(ReadableConfig tableConfig,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecExchange(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecExchange(ReadableConfig tableConfig,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecExpand(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<List<org.apache.calcite.rex.RexNode>> projects,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecExpand(ReadableConfig tableConfig,
List<List<org.apache.calcite.rex.RexNode>> projects,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecGlobalGroupAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
RowType localAggInputRowType,
boolean generateUpdateBefore,
boolean needRetraction,
Integer indexOfCountStar,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecGlobalGroupAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
RowType localAggInputRowType,
boolean generateUpdateBefore,
boolean needRetraction,
Integer indexOfCountStar,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecGlobalWindowAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
NamedWindowProperty[] namedWindowProperties,
List<InputProperty> inputProperties,
RowType localAggInputRowType,
RowType outputType,
String description) |
StreamExecGlobalWindowAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
NamedWindowProperty[] namedWindowProperties,
InputProperty inputProperty,
RowType localAggInputRowType,
RowType outputType,
String description) |
StreamExecGroupAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecGroupAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecGroupTableAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecGroupWindowAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
NamedWindowProperty[] namedWindowProperties,
boolean needRetraction,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecGroupWindowAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
NamedWindowProperty[] namedWindowProperties,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecIncrementalGroupAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] partialAggGrouping,
int[] finalAggGrouping,
org.apache.calcite.rel.core.AggregateCall[] partialOriginalAggCalls,
boolean[] partialAggCallNeedRetractions,
RowType partialLocalAggInputType,
boolean partialAggNeedRetraction,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecIncrementalGroupAggregate(ReadableConfig tableConfig,
int[] partialAggGrouping,
int[] finalAggGrouping,
org.apache.calcite.rel.core.AggregateCall[] partialOriginalAggCalls,
boolean[] partialAggCallNeedRetractions,
RowType partialLocalAggInputType,
boolean partialAggNeedRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecIntervalJoin(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
IntervalJoinSpec intervalJoinSpec,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecIntervalJoin(ReadableConfig tableConfig,
IntervalJoinSpec intervalJoinSpec,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
StreamExecJoin(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
JoinSpec joinSpec,
List<int[]> leftUpsertKeys,
List<int[]> rightUpsertKeys,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecJoin(ReadableConfig tableConfig,
JoinSpec joinSpec,
List<int[]> leftUpsertKeys,
List<int[]> rightUpsertKeys,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
StreamExecLegacyTableSourceScan(ReadableConfig tableConfig,
TableSource<?> tableSource,
List<String> qualifiedName,
RowType outputType,
String description) |
StreamExecLimit(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
ConstantRankRange rankRange,
RankProcessStrategy rankStrategy,
boolean generateUpdateBefore,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecLimit(ReadableConfig tableConfig,
long limitStart,
long limitEnd,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecLocalGroupAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean needRetraction,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecLocalGroupAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecLocalWindowAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecLocalWindowAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecLookupJoin(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexNode joinCondition,
TemporalTableSourceSpec temporalTableSourceSpec,
Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
List<org.apache.calcite.rex.RexNode> projectionOnTemporalTable,
org.apache.calcite.rex.RexNode filterOnTemporalTable,
boolean lookupKeyContainsPrimaryKey,
boolean upsertMaterialize,
LookupJoinUtil.AsyncLookupOptions asyncLookupOptions,
LookupJoinUtil.RetryLookupOptions retryOptions,
ChangelogMode inputChangelogMode,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecLookupJoin(ReadableConfig tableConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexNode joinCondition,
TemporalTableSourceSpec temporalTableSourceSpec,
Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
List<org.apache.calcite.rex.RexNode> projectionOnTemporalTable,
org.apache.calcite.rex.RexNode filterOnTemporalTable,
boolean lookupKeyContainsPrimaryKey,
boolean upsertMaterialize,
LookupJoinUtil.AsyncLookupOptions asyncLookupOptions,
LookupJoinUtil.RetryLookupOptions retryOptions,
ChangelogMode inputChangelogMode,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecMatch(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
MatchSpec matchSpec,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecMatch(ReadableConfig tableConfig,
MatchSpec matchSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecMiniBatchAssigner(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
MiniBatchInterval miniBatchInterval,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecMiniBatchAssigner(ReadableConfig tableConfig,
MiniBatchInterval miniBatchInterval,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecOverAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
OverSpec overSpec,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecOverAggregate(ReadableConfig tableConfig,
OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonCalc(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<org.apache.calcite.rex.RexNode> projection,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonCalc(ReadableConfig tableConfig,
List<org.apache.calcite.rex.RexNode> projection,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonCorrelate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexNode invocation,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonCorrelate(ReadableConfig tableConfig,
FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonGroupAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonGroupAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonGroupTableAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonGroupWindowAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
NamedWindowProperty[] namedWindowProperties,
boolean generateUpdateBefore,
boolean needRetraction,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonGroupWindowAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
NamedWindowProperty[] namedWindowProperties,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonOverAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
OverSpec overSpec,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonOverAggregate(ReadableConfig tableConfig,
OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecRank(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
RankType rankType,
PartitionSpec partitionSpec,
SortSpec sortSpec,
RankRange rankRange,
RankProcessStrategy rankStrategy,
boolean outputRankNumber,
boolean generateUpdateBefore,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecRank(ReadableConfig tableConfig,
RankType rankType,
PartitionSpec partitionSpec,
SortSpec sortSpec,
RankRange rankRange,
RankProcessStrategy rankStrategy,
boolean outputRankNumber,
boolean generateUpdateBefore,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecSort(ReadableConfig tableConfig,
SortSpec sortSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecSortLimit(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
SortSpec sortSpec,
ConstantRankRange rankRange,
RankProcessStrategy rankStrategy,
boolean generateUpdateBefore,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecSortLimit(ReadableConfig tableConfig,
SortSpec sortSpec,
long limitStart,
long limitEnd,
RankProcessStrategy rankStrategy,
boolean generateUpdateBefore,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecTableSourceScan(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
DynamicTableSourceSpec tableSourceSpec,
RowType outputType,
String description) |
StreamExecTableSourceScan(ReadableConfig tableConfig,
DynamicTableSourceSpec tableSourceSpec,
RowType outputType,
String description) |
StreamExecTemporalJoin(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
JoinSpec joinSpec,
boolean isTemporalTableFunctionJoin,
int leftTimeAttributeIndex,
int rightTimeAttributeIndex,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecTemporalJoin(ReadableConfig tableConfig,
JoinSpec joinSpec,
boolean isTemporalTableFunctionJoin,
int leftTimeAttributeIndex,
int rightTimeAttributeIndex,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
StreamExecTemporalSort(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
SortSpec sortSpec,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecTemporalSort(ReadableConfig tableConfig,
SortSpec sortSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecUnion(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecUnion(ReadableConfig tableConfig,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecValues(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
List<List<RexLiteral>> tuples,
RowType outputType,
String description) |
StreamExecValues(ReadableConfig tableConfig,
List<List<RexLiteral>> tuples,
RowType outputType,
String description) |
StreamExecWatermarkAssigner(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
org.apache.calcite.rex.RexNode watermarkExpr,
int rowtimeFieldIndex,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWatermarkAssigner(ReadableConfig tableConfig,
org.apache.calcite.rex.RexNode watermarkExpr,
int rowtimeFieldIndex,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecWindowAggregate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
NamedWindowProperty[] namedWindowProperties,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWindowAggregate(ReadableConfig tableConfig,
int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
NamedWindowProperty[] namedWindowProperties,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecWindowDeduplicate(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
int[] partitionKeys,
int orderKey,
boolean keepLastRow,
WindowingStrategy windowing,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWindowDeduplicate(ReadableConfig tableConfig,
int[] partitionKeys,
int orderKey,
boolean keepLastRow,
WindowingStrategy windowing,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecWindowJoin(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
JoinSpec joinSpec,
WindowingStrategy leftWindowing,
WindowingStrategy rightWindowing,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWindowJoin(ReadableConfig tableConfig,
JoinSpec joinSpec,
WindowingStrategy leftWindowing,
WindowingStrategy rightWindowing,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
StreamExecWindowRank(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
RankType rankType,
PartitionSpec partitionSpec,
SortSpec sortSpec,
RankRange rankRange,
boolean outputRankNumber,
WindowingStrategy windowing,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWindowRank(ReadableConfig tableConfig,
RankType rankType,
PartitionSpec partitionSpec,
SortSpec sortSpec,
RankRange rankRange,
boolean outputRankNumber,
WindowingStrategy windowing,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecWindowTableFunction(int id,
ExecNodeContext context,
ReadableConfig persistedConfig,
TimeAttributeWindowingStrategy windowingStrategy,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWindowTableFunction(ReadableConfig tableConfig,
TimeAttributeWindowingStrategy windowingStrategy,
InputProperty inputProperty,
RowType outputType,
String description) |
Modifier and Type | Method and Description |
---|---|
static SortMergeJoinFunction |
SorMergeJoinOperatorUtil.getSortMergeJoinFunction(ClassLoader classLoader,
ExecNodeConfig config,
FlinkJoinType joinType,
RowType leftType,
RowType rightType,
int[] leftKeys,
int[] rightKeys,
RowType keyType,
boolean leftIsSmaller,
boolean[] filterNulls,
GeneratedJoinCondition condFunc,
double externalBufferMemRatio) |
Modifier and Type | Method and Description |
---|---|
static RowType |
RowTypeUtils.projectRowType(RowType rowType,
int[] projection)
|
Modifier and Type | Method and Description |
---|---|
static RowType |
RowTypeUtils.projectRowType(RowType rowType,
int[] projection)
|
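`RowTypeUtils.projectRowType` derives a narrower row schema from a field-index projection. A minimal sketch of how it might be used (the field names and types are illustrative, and `RowTypeUtils` lives in the flink-table-planner module, which is assumed to be on the classpath):

```java
import org.apache.flink.table.planner.typeutils.RowTypeUtils;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class ProjectRowTypeExample {
    public static void main(String[] args) {
        // Hypothetical source schema: (id INT, name VARCHAR(64), score INT)
        RowType source = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(64), new IntType()},
                new String[] {"id", "name", "score"});

        // Keep only fields 0 and 2 -> (id INT, score INT)
        RowType projected = RowTypeUtils.projectRowType(source, new int[] {0, 2});
        System.out.println(projected.getFieldNames());
    }
}
```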
Modifier and Type | Method and Description |
---|---|
static ArrowReader |
ArrowUtils.createArrowReader(org.apache.arrow.vector.VectorSchemaRoot root,
RowType rowType)
Creates an ArrowReader for the specified VectorSchemaRoot. |
static ArrowWriter<RowData> |
ArrowUtils.createRowDataArrowWriter(org.apache.arrow.vector.VectorSchemaRoot root,
RowType rowType)
Creates an ArrowWriter for the specified VectorSchemaRoot. |
static org.apache.arrow.vector.types.pojo.Schema |
ArrowUtils.toArrowSchema(RowType rowType)
Returns the Arrow schema of the specified type.
|
Modifier and Type | Field and Description |
---|---|
protected RowType |
ArrowSerializer.inputType
The input RowType.
|
protected RowType |
ArrowSerializer.outputType
The output RowType.
|
Constructor and Description |
---|
ArrowSerializer(RowType inputType,
RowType outputType) |
Constructor and Description |
---|
MiniBatchGroupAggFunction(GeneratedAggsHandleFunction genAggsHandler,
GeneratedRecordEqualiser genRecordEqualiser,
LogicalType[] accTypes,
RowType inputType,
int indexOfCountStar,
boolean generateUpdateBefore,
long stateRetentionTime)
Creates a MiniBatchGroupAggFunction. |
Constructor and Description |
---|
DynamicFilteringDataCollectorOperator(RowType dynamicFilteringFieldType,
List<Integer> dynamicFilteringFieldIndices,
long threshold,
OperatorEventGateway operatorEventGateway) |
DynamicFilteringDataCollectorOperatorFactory(RowType dynamicFilteringFieldType,
List<Integer> dynamicFilteringFieldIndices,
long threshold) |
Modifier and Type | Method and Description |
---|---|
static HashJoinOperator |
HashJoinOperator.newHashJoinOperator(HashJoinType type,
boolean leftIsBuild,
GeneratedJoinCondition condFuncCode,
boolean reverseJoinFunction,
boolean[] filterNullKeys,
GeneratedProjection buildProjectionCode,
GeneratedProjection probeProjectionCode,
boolean tryDistinctBuildRow,
int buildRowSize,
long buildRowCount,
long probeRowCount,
RowType keyType,
SortMergeJoinFunction sortMergeJoinFunction) |
Modifier and Type | Field and Description |
---|---|
protected RowType |
AbstractEmbeddedStatelessFunctionOperator.inputType
The input logical type.
|
protected RowType |
AbstractStatelessFunctionOperator.inputType
The input logical type.
|
protected RowType |
AbstractEmbeddedStatelessFunctionOperator.udfInputType
The user-defined function input logical type.
|
protected RowType |
AbstractStatelessFunctionOperator.udfInputType
The user-defined function input logical type.
|
protected RowType |
AbstractEmbeddedStatelessFunctionOperator.udfOutputType
The user-defined function output logical type.
|
protected RowType |
AbstractStatelessFunctionOperator.udfOutputType
The user-defined function output logical type.
|
Modifier and Type | Method and Description |
---|---|
abstract FlinkFnApi.CoderInfoDescriptor |
AbstractStatelessFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
abstract FlinkFnApi.CoderInfoDescriptor |
AbstractStatelessFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
Constructor and Description |
---|
AbstractEmbeddedStatelessFunctionOperator(Configuration config,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int[] udfInputOffsets) |
AbstractStatelessFunctionOperator(Configuration config,
RowType inputType,
RowType udfInputType,
RowType udfOutputType) |
Modifier and Type | Field and Description |
---|---|
protected RowType |
AbstractPythonStreamAggregateOperator.inputType
The input logical type.
|
protected RowType |
AbstractPythonStreamAggregateOperator.outputType
The output logical type.
|
protected RowType |
AbstractPythonStreamAggregateOperator.userDefinedFunctionInputType
The user-defined function input logical type.
|
protected RowType |
AbstractPythonStreamAggregateOperator.userDefinedFunctionOutputType
The user-defined function output logical type.
|
Modifier and Type | Method and Description |
---|---|
RowType |
PythonStreamGroupWindowAggregateOperator.createUserDefinedFunctionInputType() |
RowType |
AbstractPythonStreamGroupAggregateOperator.createUserDefinedFunctionInputType() |
abstract RowType |
AbstractPythonStreamAggregateOperator.createUserDefinedFunctionInputType() |
RowType |
PythonStreamGroupWindowAggregateOperator.createUserDefinedFunctionOutputType() |
RowType |
AbstractPythonStreamGroupAggregateOperator.createUserDefinedFunctionOutputType() |
abstract RowType |
AbstractPythonStreamAggregateOperator.createUserDefinedFunctionOutputType() |
protected RowType |
AbstractPythonStreamAggregateOperator.getKeyType() |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
PythonStreamGroupWindowAggregateOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
AbstractPythonStreamAggregateOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
PythonStreamGroupWindowAggregateOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
FlinkFnApi.CoderInfoDescriptor |
AbstractPythonStreamAggregateOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
static <K,W extends Window> |
PythonStreamGroupWindowAggregateOperator.createSessionGroupWindowAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore,
boolean countStarInserted,
int inputTimeFieldIndex,
WindowAssigner<W> windowAssigner,
boolean isRowTime,
long gap,
long allowedLateness,
NamedWindowProperty[] namedProperties,
java.time.ZoneId shiftTimeZone) |
static <K,W extends Window> |
PythonStreamGroupWindowAggregateOperator.createSlidingGroupWindowAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore,
boolean countStarInserted,
int inputTimeFieldIndex,
WindowAssigner<W> windowAssigner,
boolean isRowTime,
boolean isTimeWindow,
long size,
long slide,
long allowedLateness,
NamedWindowProperty[] namedProperties,
java.time.ZoneId shiftTimeZone) |
static <K,W extends Window> |
PythonStreamGroupWindowAggregateOperator.createTumblingGroupWindowAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore,
boolean countStarInserted,
int inputTimeFieldIndex,
WindowAssigner<W> windowAssigner,
boolean isRowTime,
boolean isTimeWindow,
long size,
long allowedLateness,
NamedWindowProperty[] namedProperties,
java.time.ZoneId shiftTimeZone) |
Constructor and Description |
---|
AbstractPythonStreamAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore) |
AbstractPythonStreamGroupAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore,
long minRetentionTime,
long maxRetentionTime) |
PythonStreamGroupAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean countStarInserted,
boolean generateUpdateBefore,
long minRetentionTime,
long maxRetentionTime) |
PythonStreamGroupTableAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore,
long minRetentionTime,
long maxRetentionTime) |
PythonStreamGroupWindowAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore,
boolean countStarInserted,
int inputTimeFieldIndex,
WindowAssigner<W> windowAssigner,
FlinkFnApi.GroupWindow.WindowType windowType,
boolean isRowTime,
boolean isTimeWindow,
long size,
long slide,
long gap,
long allowedLateness,
NamedWindowProperty[] namedProperties,
java.time.ZoneId shiftTimeZone) |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
AbstractArrowPythonAggregateFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
AbstractArrowPythonAggregateFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
Constructor and Description |
---|
AbstractArrowPythonAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
GeneratedProjection udafInputGeneratedProjection) |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
BatchArrowPythonOverWindowAggregateFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
Constructor and Description |
---|
BatchArrowPythonGroupAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
GeneratedProjection inputGeneratedProjection,
GeneratedProjection groupKeyGeneratedProjection,
GeneratedProjection groupSetGeneratedProjection) |
BatchArrowPythonGroupWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int inputTimeFieldIndex,
int maxLimitSize,
long windowSize,
long slideSize,
int[] namedProperties,
GeneratedProjection inputGeneratedProjection,
GeneratedProjection groupKeyGeneratedProjection,
GeneratedProjection groupSetGeneratedProjection) |
BatchArrowPythonOverWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
long[] lowerBoundary,
long[] upperBoundary,
boolean[] isRangeWindows,
int[] aggWindowIndex,
int inputTimeFieldIndex,
boolean asc,
GeneratedProjection inputGeneratedProjection,
GeneratedProjection groupKeyGeneratedProjection,
GeneratedProjection groupSetGeneratedProjection) |
Constructor and Description |
---|
AbstractStreamArrowPythonBoundedRangeOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int inputTimeFieldIndex,
long lowerBoundary,
GeneratedProjection inputGeneratedProjection) |
AbstractStreamArrowPythonBoundedRowsOperator(Configuration config,
long minRetentionTime,
long maxRetentionTime,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int inputTimeFieldIndex,
long lowerBoundary,
GeneratedProjection inputGeneratedProjection) |
AbstractStreamArrowPythonOverWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int inputTimeFieldIndex,
long lowerBoundary,
GeneratedProjection inputGeneratedProjection) |
StreamArrowPythonGroupWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int inputTimeFieldIndex,
WindowAssigner<W> windowAssigner,
Trigger<W> trigger,
long allowedLateness,
NamedWindowProperty[] namedProperties,
java.time.ZoneId shiftTimeZone,
GeneratedProjection generatedProjection) |
StreamArrowPythonProcTimeBoundedRangeOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int inputTimeFieldIndex,
long lowerBoundary,
GeneratedProjection generatedProjection) |
StreamArrowPythonProcTimeBoundedRowsOperator(Configuration config,
long minRetentionTime,
long maxRetentionTime,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int inputTimeFieldIndex,
long lowerBoundary,
GeneratedProjection inputGeneratedProjection) |
StreamArrowPythonRowTimeBoundedRangeOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int inputTimeFieldIndex,
long lowerBoundary,
GeneratedProjection generatedProjection) |
StreamArrowPythonRowTimeBoundedRowsOperator(Configuration config,
long minRetentionTime,
long maxRetentionTime,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int inputTimeFieldIndex,
long lowerBoundary,
GeneratedProjection inputGeneratedProjection) |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
PythonScalarFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
PythonScalarFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
Constructor and Description |
---|
AbstractPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
GeneratedProjection udfInputGeneratedProjection,
GeneratedProjection forwardedFieldGeneratedProjection) |
EmbeddedPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
int[] udfInputOffsets,
GeneratedProjection forwardedFieldGeneratedProjection) |
PythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
GeneratedProjection udfInputGeneratedProjection,
GeneratedProjection forwardedFieldGeneratedProjection) |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
ArrowPythonScalarFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
ArrowPythonScalarFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutputType) |
Constructor and Description |
---|
ArrowPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
GeneratedProjection udfInputGeneratedProjection,
GeneratedProjection forwardedFieldGeneratedProjection) |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
PythonTableFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
PythonTableFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
Constructor and Description |
---|
EmbeddedPythonTableFunctionOperator(Configuration config,
PythonFunctionInfo tableFunction,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
FlinkJoinType joinType,
int[] udfInputOffsets) |
PythonTableFunctionOperator(Configuration config,
PythonFunctionInfo tableFunction,
RowType inputType,
RowType udfInputType,
RowType udfOutputType,
FlinkJoinType joinType,
GeneratedProjection udtfInputGeneratedProjection) |
Modifier and Type | Method and Description |
---|---|
RowType |
InternalTypeInfo.toRowType() |
Modifier and Type | Method and Description |
---|---|
static <T> RowDataSerializer |
InternalSerializers.create(RowType type)
Creates a TypeSerializer for internal data structures of the given RowType. |
static InternalTypeInfo<RowData> |
InternalTypeInfo.of(RowType type)
Creates type information for a RowType represented by internal data structures. |
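A minimal sketch of how these two factories fit together, assuming the Flink `table-common` and `table-runtime` modules are on the classpath; the class name `InternalTypeInfoExample` is made up for illustration:

```java
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalSerializers;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.runtime.typeutils.RowDataSerializer;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class InternalTypeInfoExample {
    public static void main(String[] args) {
        // Build a RowType of (INT, STRING) fields.
        RowType rowType = RowType.of(new IntType(), new VarCharType(VarCharType.MAX_LENGTH));

        // Type information for the internal RowData representation of this row type.
        InternalTypeInfo<RowData> typeInfo = InternalTypeInfo.of(rowType);

        // A serializer for internal row data of this type.
        RowDataSerializer serializer = InternalSerializers.create(rowType);

        // InternalTypeInfo.toRowType() (listed above) recovers the original RowType.
        System.out.println(typeInfo.toRowType());
    }
}
```

`InternalTypeInfo.of` is the usual bridge when an operator or sink needs `TypeInformation<RowData>` for a logical row type, while `InternalSerializers.create` is used when only the serializer itself is needed.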
FlinkFnApi.Schema.FieldType |
PythonTypeUtils.LogicalTypeToProtoTypeConverter.visit(RowType rowType) |
Constructor and Description |
---|
RowDataSerializer(RowType rowType) |
Modifier and Type | Method and Description |
---|---|
static RowType |
RowType.of(boolean isNullable,
LogicalType... types) |
static RowType |
RowType.of(boolean nullable,
LogicalType[] types,
String[] names) |
static RowType |
RowType.of(LogicalType... types) |
static RowType |
RowType.of(LogicalType[] types,
String[] names) |
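The four `RowType.of` overloads listed above differ only in whether field names and row nullability are supplied explicitly. A short sketch, assuming Flink's `table-common` module on the classpath (the class name `RowTypeFactoryExample` is made up for illustration):

```java
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class RowTypeFactoryExample {
    public static void main(String[] args) {
        // Types only: fields get generated default names (f0, f1, ...).
        RowType positional =
                RowType.of(new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH));

        // Types plus explicit field names.
        RowType named = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // The boolean-prefixed overloads additionally control the row's nullability.
        RowType notNullable = RowType.of(false, new BigIntType());

        System.out.println(positional.getFieldNames());
        System.out.println(named.asSummaryString());
    }
}
```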
Modifier and Type | Method and Description |
---|---|
R |
LogicalTypeVisitor.visit(RowType rowType) |
Modifier and Type | Method and Description |
---|---|
static RowType |
LogicalTypeUtils.renameRowFields(RowType rowType,
List<String> newFieldNames)
Renames the fields of the given RowType. |
static RowType |
LogicalTypeUtils.toRowType(LogicalType t)
Converts any logical type to a row type. |
Modifier and Type | Method and Description |
---|---|
static RowType |
LogicalTypeUtils.renameRowFields(RowType rowType,
List<String> newFieldNames)
Renames the fields of the given RowType. |
LogicalType |
LogicalTypeDuplicator.visit(RowType rowType) |
R |
LogicalTypeDefaultVisitor.visit(RowType rowType) |
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.