Modifier and Type | Method and Description |
---|---|
DataGeneratorContainer |
RandomGeneratorVisitor.visit(RowType rowType) |
Modifier and Type | Method and Description |
---|---|
JdbcRowConverter |
MySQLDialect.getRowConverter(RowType rowType) |
JdbcRowConverter |
JdbcDialect.getRowConverter(RowType rowType)
Gets a converter that converts between JDBC objects and Flink internal objects.
|
JdbcRowConverter |
PostgresDialect.getRowConverter(RowType rowType) |
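A hedged sketch of how these dialect methods are used together (the `JdbcDialects` URL lookup and the package locations are assumptions based on the Flink 1.13/1.14-era connector; only the `getRowConverter(RowType)` signature comes from the listing above):

```java
import org.apache.flink.connector.jdbc.dialect.JdbcDialect;
import org.apache.flink.connector.jdbc.dialect.JdbcDialects;
import org.apache.flink.connector.jdbc.internal.converter.JdbcRowConverter;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class DialectRowConverterSketch {
    public static void main(String[] args) {
        // Logical row shape of the JDBC table: (id INT, name VARCHAR(64)).
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(64)},
                new String[] {"id", "name"});

        // Resolve a dialect from the JDBC URL, then ask it for the converter
        // that maps between JDBC objects and Flink's internal RowData.
        JdbcDialect dialect = JdbcDialects.get("jdbc:mysql://localhost:3306/db")
                .orElseThrow(IllegalStateException::new);
        JdbcRowConverter converter = dialect.getRowConverter(rowType);
        System.out.println(converter.getClass().getSimpleName());
    }
}
```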
Modifier and Type | Field and Description |
---|---|
protected RowType |
AbstractJdbcRowConverter.rowType |
Constructor and Description |
---|
AbstractJdbcRowConverter(RowType rowType) |
DerbyRowConverter(RowType rowType) |
MySQLRowConverter(RowType rowType) |
PostgresRowConverter(RowType rowType) |
Constructor and Description |
---|
JdbcRowDataLookupFunction(JdbcConnectorOptions options,
JdbcLookupOptions lookupOptions,
String[] fieldNames,
DataType[] fieldTypes,
String[] keyNames,
RowType rowType) |
Constructor and Description |
---|
HiveBulkFormatAdapter(JobConfWrapper jobConfWrapper,
List<String> partitionKeys,
String[] fieldNames,
DataType[] fieldTypes,
String hiveVersion,
RowType producedRowType,
boolean useMapRedReader) |
HiveCompactReaderFactory(org.apache.hadoop.hive.metastore.api.StorageDescriptor sd,
Properties properties,
org.apache.hadoop.mapred.JobConf jobConf,
CatalogTable catalogTable,
String hiveVersion,
RowType producedRowType,
boolean useMapRedReader) |
Modifier and Type | Method and Description |
---|---|
static AvroToRowDataConverters.AvroToRowDataConverter |
AvroToRowDataConverters.createRowConverter(RowType rowType) |
Constructor and Description |
---|
AvroRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> typeInfo)
Creates an Avro deserialization schema for the given logical type.
|
AvroRowDataSerializationSchema(RowType rowType)
Creates an Avro serialization schema with the given record row type.
|
AvroRowDataSerializationSchema(RowType rowType,
SerializationSchema<org.apache.avro.generic.GenericRecord> nestedSchema,
RowDataToAvroConverters.RowDataToAvroConverter runtimeConverter)
Creates an Avro serialization schema with the given record row type, nested schema and
runtime converters.
|
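A minimal sketch of constructing the Avro schemas above, assuming `InternalTypeInfo.of(RowType)` is used to obtain the required `TypeInformation<RowData>` (that helper is not part of this listing):

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.avro.AvroRowDataDeserializationSchema;
import org.apache.flink.formats.avro.AvroRowDataSerializationSchema;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class AvroRowDataSchemaSketch {
    public static void main(String[] args) {
        // Record row type: (id BIGINT, name STRING).
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // TypeInformation describing the RowData records the deserializer produces.
        TypeInformation<RowData> typeInfo = InternalTypeInfo.of(rowType);

        AvroRowDataDeserializationSchema deserializer =
                new AvroRowDataDeserializationSchema(rowType, typeInfo);
        AvroRowDataSerializationSchema serializer =
                new AvroRowDataSerializationSchema(rowType);
    }
}
```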
Modifier and Type | Method and Description |
---|---|
static RowType |
DebeziumAvroSerializationSchema.createDebeziumAvroRowType(DataType dataType) |
static RowType |
DebeziumAvroDeserializationSchema.createDebeziumAvroRowType(DataType databaseSchema) |
Constructor and Description |
---|
DebeziumAvroDeserializationSchema(RowType rowType,
TypeInformation<RowData> producedTypeInfo,
String schemaRegistryUrl,
Map<String,?> registryConfigs) |
DebeziumAvroSerializationSchema(RowType rowType,
String schemaRegistryUrl,
String schemaRegistrySubject,
Map<String,?> registryConfigs) |
Modifier and Type | Method and Description |
---|---|
static org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema |
CsvRowSchemaConverter.convert(RowType rowType)
Converts RowType to CsvSchema. |
static RowDataToCsvConverters.RowDataToCsvConverter |
RowDataToCsvConverters.createRowConverter(RowType type) |
CsvToRowDataConverters.CsvToRowDataConverter |
CsvToRowDataConverters.createRowConverter(RowType rowType,
boolean isTopLevel) |
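A short sketch of deriving a CSV schema from a row type with `CsvRowSchemaConverter.convert(RowType)`; the example column layout is illustrative only:

```java
import org.apache.flink.formats.csv.CsvRowSchemaConverter;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
import org.apache.flink.table.types.logical.DoubleType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class CsvSchemaSketch {
    public static void main(String[] args) {
        // Row type: (symbol VARCHAR(32), price DOUBLE).
        RowType rowType = RowType.of(
                new LogicalType[] {new VarCharType(32), new DoubleType()},
                new String[] {"symbol", "price"});

        // Derive the (shaded) Jackson CsvSchema from the RowType.
        CsvSchema csvSchema = CsvRowSchemaConverter.convert(rowType);
        System.out.println(csvSchema.size()); // number of columns
    }
}
```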
Constructor and Description |
---|
Builder(RowType rowType)
Creates a CsvRowDataSerializationSchema expecting the given RowType. |
Builder(RowType rowType,
TypeInformation<RowData> resultTypeInfo)
Creates a CSV deserialization schema for the given TypeInformation with optional parameters. |
CsvInputFormat(Path[] filePaths,
DataType[] fieldTypes,
String[] fieldNames,
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema csvSchema,
RowType formatRowType,
int[] selectFields,
List<String> partitionKeys,
String defaultPartValue,
long limit,
int[] csvSelectFieldToProjectFieldMapping,
int[] csvSelectFieldToCsvFieldMapping,
boolean ignoreParseErrors) |
Modifier and Type | Method and Description |
---|---|
JsonToRowDataConverters.JsonToRowDataConverter |
JsonToRowDataConverters.createRowConverter(RowType rowType) |
Constructor and Description |
---|
JsonRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> resultTypeInfo,
boolean failOnMissingField,
boolean ignoreParseErrors,
TimestampFormat timestampFormat) |
JsonRowDataSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonFormatOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral,
boolean encodeDecimalAsPlainNumber) |
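A hedged sketch wiring the JSON constructor parameters above together; the `InternalTypeInfo.of` helper and the `TimestampFormat` package location are assumptions (Flink 1.14-era), and the parameter values are illustrative:

```java
import org.apache.flink.formats.common.TimestampFormat;
import org.apache.flink.formats.json.JsonFormatOptions;
import org.apache.flink.formats.json.JsonRowDataDeserializationSchema;
import org.apache.flink.formats.json.JsonRowDataSerializationSchema;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class JsonRowDataSchemaSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        JsonRowDataDeserializationSchema deserializer = new JsonRowDataDeserializationSchema(
                rowType,
                InternalTypeInfo.of(rowType), // resultTypeInfo
                false,                        // failOnMissingField
                true,                         // ignoreParseErrors
                TimestampFormat.ISO_8601);

        JsonRowDataSerializationSchema serializer = new JsonRowDataSerializationSchema(
                rowType,
                TimestampFormat.ISO_8601,
                JsonFormatOptions.MapNullKeyMode.LITERAL,
                "null",                       // mapNullKeyLiteral
                false);                       // encodeDecimalAsPlainNumber
    }
}
```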
Constructor and Description |
---|
CanalJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonFormatOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral,
boolean encodeDecimalAsPlainNumber) |
Constructor and Description |
---|
DebeziumJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonFormatOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral,
boolean encodeDecimalAsPlainNumber) |
Constructor and Description |
---|
MaxwellJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonFormatOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral,
boolean encodeDecimalAsPlainNumber) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
ParquetColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig,
RowType producedRowType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive)
Creates a partitioned ParquetColumnarRowInputFormat; the partition columns can be generated from the Path. |
Constructor and Description |
---|
ParquetColumnarRowInputFormat(Configuration hadoopConfig,
RowType projectedType,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive)
Constructor to create a Parquet format without extra fields.
|
ParquetColumnarRowInputFormat(Configuration hadoopConfig,
RowType projectedType,
RowType producedType,
ColumnBatchFactory<SplitT> batchFactory,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive)
Constructor to create a Parquet format with extra fields created by ColumnBatchFactory. |
ParquetVectorizedInputFormat(SerializableConfiguration hadoopConfig,
RowType projectedType,
ColumnBatchFactory<SplitT> batchFactory,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive) |
Modifier and Type | Method and Description |
---|---|
static ParquetWriterFactory<RowData> |
ParquetRowDataBuilder.createWriterFactory(RowType rowType,
Configuration conf,
boolean utcTimestamp)
Creates a Parquet BulkWriter.Factory. |
Constructor and Description |
---|
FlinkParquetBuilder(RowType rowType,
Configuration conf,
boolean utcTimestamp) |
ParquetRowDataBuilder(org.apache.parquet.io.OutputFile path,
RowType rowType,
boolean utcTimestamp) |
ParquetRowDataWriter(org.apache.parquet.io.api.RecordConsumer recordConsumer,
RowType rowType,
org.apache.parquet.schema.GroupType schema,
boolean utcTimestamp) |
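A minimal sketch of creating the Parquet writer factory listed above; the Hadoop `Configuration` is left empty and the single-column row type is illustrative:

```java
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.hadoop.conf.Configuration;

public class ParquetWriterFactorySketch {
    public static void main(String[] args) {
        // Row type with a single BIGINT column.
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType()},
                new String[] {"ts"});

        // Build a BulkWriter.Factory that writes RowData as Parquet;
        // utcTimestamp=true stores timestamps in UTC.
        ParquetWriterFactory<RowData> factory =
                ParquetRowDataBuilder.createWriterFactory(rowType, new Configuration(), true);
    }
}
```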
Modifier and Type | Method and Description |
---|---|
static org.apache.parquet.schema.MessageType |
ParquetSchemaConverter.convertToParquetMessageType(String name,
RowType rowType) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
OrcColumnarRowFileInputFormat.createPartitionedFormat(OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> shim,
Configuration hadoopConfig,
RowType tableType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize)
Creates a partitioned OrcColumnarRowFileInputFormat; the partition columns can be generated from the split. |
Constructor and Description |
---|
OrcColumnarRowFileInputFormat(OrcShim<BatchT> shim,
Configuration hadoopConfig,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
ColumnBatchFactory<BatchT,SplitT> batchFactory,
RowType projectedOutputType) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
OrcNoHiveColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig,
RowType tableType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize)
Creates a partitioned OrcColumnarRowFileInputFormat; the partition columns can be generated from the split. |
Constructor and Description |
---|
OrcRowColumnVector(org.apache.hadoop.hive.ql.exec.vector.StructColumnVector hiveVector,
RowType type) |
Modifier and Type | Method and Description |
---|---|
static FlinkFnApi.CoderInfoDescriptor |
ProtoUtils.createArrowTypeCoderInfoDescriptorProto(RowType rowType,
FlinkFnApi.CoderInfoDescriptor.Mode mode,
boolean separatedWithEndMessage) |
static FlinkFnApi.CoderInfoDescriptor |
ProtoUtils.createFlattenRowTypeCoderInfoDescriptorProto(RowType rowType,
FlinkFnApi.CoderInfoDescriptor.Mode mode,
boolean separatedWithEndMessage) |
static FlinkFnApi.CoderInfoDescriptor |
ProtoUtils.createOverWindowArrowTypeCoderInfoDescriptorProto(RowType rowType,
FlinkFnApi.CoderInfoDescriptor.Mode mode,
boolean separatedWithEndMessage) |
static FlinkFnApi.CoderInfoDescriptor |
ProtoUtils.createRowTypeCoderInfoDescriptorProto(RowType rowType,
FlinkFnApi.CoderInfoDescriptor.Mode mode,
boolean separatedWithEndMessage) |
Constructor and Description |
---|
RowDataFieldsKinesisPartitioner(RowType physicalType,
List<String> partitionKeys) |
RowDataFieldsKinesisPartitioner(RowType physicalType,
List<String> partitionKeys,
String delimiter) |
Modifier and Type | Method and Description |
---|---|
ResolvedExpression |
Parser.parseSqlExpression(String sqlExpression,
RowType inputRowType,
LogicalType outputType)
Entry point for parsing SQL expressions given as a String.
|
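A hedged sketch of calling `parseSqlExpression` with the argument types listed above. `Parser` is an internal interface; obtaining it via `TableEnvironmentInternal.getParser()` is an assumption, not part of this listing:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.internal.TableEnvironmentInternal;
import org.apache.flink.table.delegation.Parser;
import org.apache.flink.table.expressions.ResolvedExpression;
import org.apache.flink.table.types.logical.BooleanType;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;

public class ParseSqlExpressionSketch {
    public static void main(String[] args) {
        TableEnvironment env = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // Assumption: the planner's Parser is reachable through the
        // internal table environment interface.
        Parser parser = ((TableEnvironmentInternal) env).getParser();

        // Input row type: (a INT).
        RowType inputRowType = RowType.of(
                new LogicalType[] {new IntType()},
                new String[] {"a"});

        // Resolve "a > 10" against the input row, expecting a BOOLEAN result.
        ResolvedExpression expr =
                parser.parseSqlExpression("a > 10", inputRowType, new BooleanType());
        System.out.println(expr.getOutputDataType());
    }
}
```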
Modifier and Type | Method and Description |
---|---|
ResolvedExpression |
SqlExpressionResolver.resolveExpression(String sqlExpression,
RowType inputRowType,
LogicalType outputType)
Translates the given SQL expression string or throws a ValidationException. |
Modifier and Type | Method and Description |
---|---|
default RowType |
FileSystemFormatFactory.ReaderContext.getFormatRowType()
Returns the RowType of the table, excluding partition key fields.
|
Constructor and Description |
---|
FileSystemLookupFunction(PartitionFetcher<P> partitionFetcher,
PartitionFetcher.Context<P> fetcherContext,
PartitionReader<P,RowData> partitionReader,
RowType rowType,
int[] lookupKeys,
java.time.Duration reloadInterval) |
Modifier and Type | Method and Description |
---|---|
SqlExprToRexConverter |
SqlExprToRexConverterFactory.create(RowType inputRowType,
LogicalType outputType)
Creates a new instance of SqlExprToRexConverter to convert a SQL expression to a RexNode. |
Modifier and Type | Method and Description |
---|---|
static RowType |
DynamicSourceUtils.createProducedType(ResolvedSchema schema,
DynamicTableSource source)
Returns the RowType that a source should produce as the input into the runtime. |
Modifier and Type | Method and Description |
---|---|
ResolvedExpression |
ParserImpl.parseSqlExpression(String sqlExpression,
RowType inputRowType,
LogicalType outputType) |
Modifier and Type | Method and Description |
---|---|
RowType |
SourceAbilityContext.getSourceRowType() |
Modifier and Type | Method and Description |
---|---|
Optional<RowType> |
SourceAbilitySpecBase.getProducedType() |
Optional<RowType> |
SourceAbilitySpec.getProducedType()
Returns the produced RowType after this ability is applied. |
Constructor and Description |
---|
ProjectPushDownSpec(int[][] projectedFields,
RowType producedType) |
ReadingMetadataSpec(List<String> metadataKeys,
RowType producedType) |
SourceAbilityContext(org.apache.flink.table.planner.calcite.FlinkContext context,
RowType sourceRowType) |
SourceAbilitySpecBase(RowType producedType) |
SourceWatermarkSpec(boolean sourceWatermarkEnabled,
RowType producedType) |
WatermarkPushDownSpec(org.apache.calcite.rex.RexNode watermarkExpr,
long idleTimeoutMillis,
RowType producedType) |
Modifier and Type | Method and Description |
---|---|
protected RowType |
BatchExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType) |
protected RowType |
BatchExecOverAggregateBase.getInputTypeWithConstants() |
Modifier and Type | Method and Description |
---|---|
protected RowType |
BatchExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType) |
Constructor and Description |
---|
BatchExecBoundedStreamScan(DataStream<?> dataStream,
DataType sourceType,
int[] fieldIndexes,
List<String> qualifiedName,
RowType outputType,
String description) |
BatchExecCalc(List<org.apache.calcite.rex.RexNode> projection,
org.apache.calcite.rex.RexNode condition,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecCorrelate(FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
org.apache.calcite.rex.RexNode condition,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecExchange(InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecExpand(List<List<org.apache.calcite.rex.RexNode>> projects,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecHashAggregate(int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
RowType aggInputRowType,
boolean isMerge,
boolean isFinal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecHashJoin(JoinSpec joinSpec,
int estimatedLeftAvgRowSize,
int estimatedRightAvgRowSize,
long estimatedLeftRowCount,
long estimatedRightRowCount,
boolean leftIsBuild,
boolean tryDistinctBuildRow,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
BatchExecHashWindowAggregate(int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
int inputTimeFieldIndex,
boolean inputTimeIsDate,
PlannerNamedWindowProperty[] namedWindowProperties,
RowType aggInputRowType,
boolean enableAssignPane,
boolean isMerge,
boolean isFinal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecLegacyTableSourceScan(TableSource<?> tableSource,
List<String> qualifiedName,
RowType outputType,
String description) |
BatchExecLookupJoin(FlinkJoinType joinType,
org.apache.calcite.rex.RexNode joinCondition,
TemporalTableSourceSpec temporalTableSourceSpec,
Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
List<org.apache.calcite.rex.RexNode> projectionOnTemporalTable,
org.apache.calcite.rex.RexNode filterOnTemporalTable,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecNestedLoopJoin(FlinkJoinType joinType,
org.apache.calcite.rex.RexNode condition,
boolean leftIsBuild,
boolean singleRowJoin,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
BatchExecOverAggregate(OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecOverAggregateBase(OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonCalc(List<org.apache.calcite.rex.RexNode> projection,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonCalc(List<org.apache.calcite.rex.RexNode> projection,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
BatchExecPythonCorrelate(FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonGroupAggregate(int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonGroupWindowAggregate(int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
int inputTimeFieldIndex,
PlannerNamedWindowProperty[] namedWindowProperties,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecPythonOverAggregate(OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecRank(int[] partitionFields,
int[] sortFields,
long rankStart,
long rankEnd,
boolean outputRankNumber,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecSort(SortSpec sortSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecSortAggregate(int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
RowType aggInputRowType,
boolean isMerge,
boolean isFinal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecSortLimit(SortSpec sortSpec,
long limitStart,
long limitEnd,
boolean isGlobal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecSortMergeJoin(FlinkJoinType joinType,
int[] leftKeys,
int[] rightKeys,
boolean[] filterNulls,
org.apache.calcite.rex.RexNode nonEquiCondition,
boolean leftIsSmaller,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
BatchExecSortWindowAggregate(int[] grouping,
int[] auxGrouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
int inputTimeFieldIndex,
boolean inputTimeIsDate,
PlannerNamedWindowProperty[] namedWindowProperties,
RowType aggInputRowType,
boolean enableAssignPane,
boolean isMerge,
boolean isFinal,
InputProperty inputProperty,
RowType outputType,
String description) |
BatchExecTableSourceScan(DynamicTableSourceSpec tableSourceSpec,
RowType outputType,
String description) |
BatchExecUnion(List<InputProperty> inputProperties,
RowType outputType,
String description) |
BatchExecValues(List<List<RexLiteral>> tuples,
RowType outputType,
String description) |
Modifier and Type | Method and Description |
---|---|
protected abstract RowType |
CommonExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType)
Checks whether the given row type is legal and converts it if needed.
|
Modifier and Type | Method and Description |
---|---|
protected abstract RowType |
CommonExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType)
Checks whether the given row type is legal and converts it if needed.
|
protected void |
CommonExecLookupJoin.validateLookupKeyType(Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
RowType inputRowType,
RowType tableSourceRowType) |
Constructor and Description |
---|
CommonExecCalc(List<org.apache.calcite.rex.RexNode> projection,
org.apache.calcite.rex.RexNode condition,
Class<?> operatorBaseClass,
boolean retainHeader,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecCorrelate(FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
org.apache.calcite.rex.RexNode condition,
Class<?> operatorBaseClass,
boolean retainHeader,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecExchange(int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecExpand(List<List<org.apache.calcite.rex.RexNode>> projects,
boolean retainHeader,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecLegacyTableSourceScan(TableSource<?> tableSource,
List<String> qualifiedName,
RowType outputType,
String description) |
CommonExecLookupJoin(FlinkJoinType joinType,
org.apache.calcite.rex.RexNode joinCondition,
TemporalTableSourceSpec temporalTableSourceSpec,
Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
List<org.apache.calcite.rex.RexNode> projectionOnTemporalTable,
org.apache.calcite.rex.RexNode filterOnTemporalTable,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecPythonCalc(List<org.apache.calcite.rex.RexNode> projection,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecPythonCorrelate(FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecUnion(int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
CommonExecValues(List<List<RexLiteral>> tuples,
int id,
RowType outputType,
String description) |
Modifier and Type | Method and Description |
---|---|
LogicalType[] |
SortSpec.getFieldTypes(RowType input)
Gets the field types of the sort fields according to the input type.
|
Modifier and Type | Method and Description |
---|---|
protected RowType |
StreamExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType) |
Modifier and Type | Method and Description |
---|---|
protected RowType |
StreamExecLegacySink.checkAndConvertInputTypeIfNeeded(RowType inputRowType) |
static Tuple2<Pattern<RowData,RowData>,List<String>> |
StreamExecMatch.translatePattern(MatchSpec matchSpec,
TableConfig config,
org.apache.calcite.tools.RelBuilder relBuilder,
RowType inputRowType) |
Constructor and Description |
---|
StreamExecCalc(List<org.apache.calcite.rex.RexNode> projection,
org.apache.calcite.rex.RexNode condition,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecCalc(List<org.apache.calcite.rex.RexNode> projection,
org.apache.calcite.rex.RexNode condition,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecChangelogNormalize(int[] uniqueKeys,
boolean generateUpdateBefore,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecChangelogNormalize(int[] uniqueKeys,
boolean generateUpdateBefore,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecCorrelate(FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
org.apache.calcite.rex.RexNode condition,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecCorrelate(FlinkJoinType joinType,
org.apache.calcite.rex.RexNode invocation,
org.apache.calcite.rex.RexNode condition,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecDataStreamScan(DataStream<?> dataStream,
DataType sourceType,
int[] fieldIndexes,
String[] fieldNames,
List<String> qualifiedName,
RowType outputType,
String description) |
StreamExecDeduplicate(int[] uniqueKeys,
boolean isRowtime,
boolean keepLastRow,
boolean generateUpdateBefore,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecDeduplicate(int[] uniqueKeys,
boolean isRowtime,
boolean keepLastRow,
boolean generateUpdateBefore,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecDropUpdateBefore(InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecDropUpdateBefore(int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecExchange(InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecExchange(int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecExpand(List<List<org.apache.calcite.rex.RexNode>> projects,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecExpand(List<List<org.apache.calcite.rex.RexNode>> projects,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecGlobalGroupAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
RowType localAggInputRowType,
boolean generateUpdateBefore,
boolean needRetraction,
Integer indexOfCountStar,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecGlobalGroupAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
RowType localAggInputRowType,
boolean generateUpdateBefore,
boolean needRetraction,
Integer indexOfCountStar,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecGlobalWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
PlannerNamedWindowProperty[] namedWindowProperties,
InputProperty inputProperty,
RowType localAggInputRowType,
RowType outputType,
String description) |
StreamExecGlobalWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
PlannerNamedWindowProperty[] namedWindowProperties,
int id,
List<InputProperty> inputProperties,
RowType localAggInputRowType,
RowType outputType,
String description) |
StreamExecGroupAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecGroupAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecGroupTableAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecGroupWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
PlannerNamedWindowProperty[] namedWindowProperties,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecGroupWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
PlannerNamedWindowProperty[] namedWindowProperties,
boolean needRetraction,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecIncrementalGroupAggregate(int[] partialAggGrouping,
int[] finalAggGrouping,
org.apache.calcite.rel.core.AggregateCall[] partialOriginalAggCalls,
boolean[] partialAggCallNeedRetractions,
RowType partialLocalAggInputType,
boolean partialAggNeedRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecIncrementalGroupAggregate(int[] partialAggGrouping,
int[] finalAggGrouping,
org.apache.calcite.rel.core.AggregateCall[] partialOriginalAggCalls,
boolean[] partialAggCallNeedRetractions,
RowType partialLocalAggInputType,
boolean partialAggNeedRetraction,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecIntervalJoin(IntervalJoinSpec intervalJoinSpec,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
StreamExecIntervalJoin(IntervalJoinSpec intervalJoinSpec,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecJoin(JoinSpec joinSpec,
List<int[]> leftUniqueKeys,
List<int[]> rightUniqueKeys,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
StreamExecJoin(JoinSpec joinSpec,
List<int[]> leftUniqueKeys,
List<int[]> rightUniqueKeys,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecLegacyTableSourceScan(TableSource<?> tableSource,
List<String> qualifiedName,
RowType outputType,
String description) |
StreamExecLimit(ConstantRankRange rankRange,
RankProcessStrategy rankStrategy,
boolean generateUpdateBefore,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecLimit(long limitStart,
long limitEnd,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecLocalGroupAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecLocalGroupAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean needRetraction,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecLocalWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecLocalWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecLookupJoin(FlinkJoinType joinType,
org.apache.calcite.rex.RexNode joinCondition,
TemporalTableSourceSpec temporalTableSourceSpec,
Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
List<org.apache.calcite.rex.RexNode> projectionOnTemporalTable,
org.apache.calcite.rex.RexNode filterOnTemporalTable,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecLookupJoin(FlinkJoinType joinType,
org.apache.calcite.rex.RexNode joinCondition,
TemporalTableSourceSpec temporalTableSourceSpec,
Map<Integer,LookupJoinUtil.LookupKey> lookupKeys,
List<org.apache.calcite.rex.RexNode> projectionOnTemporalTable,
org.apache.calcite.rex.RexNode filterOnTemporalTable,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecMatch(MatchSpec matchSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecMatch(MatchSpec matchSpec,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecMiniBatchAssigner(MiniBatchInterval miniBatchInterval,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecMiniBatchAssigner(MiniBatchInterval miniBatchInterval,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecOverAggregate(OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecOverAggregate(OverSpec overSpec,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonCalc(List<org.apache.calcite.rex.RexNode> projection,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonCalc(List<org.apache.calcite.rex.RexNode> projection,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonCorrelate(FlinkJoinType joinType,
org.apache.calcite.rex.RexCall invocation,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonCorrelate(FlinkJoinType joinType,
org.apache.calcite.rex.RexNode invocation,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonGroupAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonGroupAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonGroupTableAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
boolean[] aggCallNeedRetractions,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonGroupWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
PlannerNamedWindowProperty[] namedWindowProperties,
boolean generateUpdateBefore,
boolean needRetraction,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonGroupWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
PlannerNamedWindowProperty[] namedWindowProperties,
boolean generateUpdateBefore,
boolean needRetraction,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecPythonOverAggregate(OverSpec overSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecPythonOverAggregate(OverSpec overSpec,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecRank(RankType rankType,
PartitionSpec partitionSpec,
SortSpec sortSpec,
RankRange rankRange,
RankProcessStrategy rankStrategy,
boolean outputRankNumber,
boolean generateUpdateBefore,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecRank(RankType rankType,
PartitionSpec partitionSpec,
SortSpec sortSpec,
RankRange rankRange,
RankProcessStrategy rankStrategy,
boolean outputRankNumber,
boolean generateUpdateBefore,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecSort(SortSpec sortSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecSortLimit(SortSpec sortSpec,
ConstantRankRange rankRange,
RankProcessStrategy rankStrategy,
boolean generateUpdateBefore,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecSortLimit(SortSpec sortSpec,
long limitStart,
long limitEnd,
RankProcessStrategy rankStrategy,
boolean generateUpdateBefore,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecTableSourceScan(DynamicTableSourceSpec tableSourceSpec,
int id,
RowType outputType,
String description) |
StreamExecTableSourceScan(DynamicTableSourceSpec tableSourceSpec,
RowType outputType,
String description) |
StreamExecTemporalJoin(JoinSpec joinSpec,
boolean isTemporalTableFunctionJoin,
int leftTimeAttributeIndex,
int rightTimeAttributeIndex,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
StreamExecTemporalJoin(JoinSpec joinSpec,
boolean isTemporalTableFunctionJoin,
int leftTimeAttributeIndex,
int rightTimeAttributeIndex,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecTemporalSort(SortSpec sortSpec,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecTemporalSort(SortSpec sortSpec,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecUnion(int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecUnion(List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecValues(List<List<RexLiteral>> tuples,
int id,
RowType outputType,
String description) |
StreamExecValues(List<List<RexLiteral>> tuples,
RowType outputType,
String description) |
StreamExecWatermarkAssigner(org.apache.calcite.rex.RexNode watermarkExpr,
int rowtimeFieldIndex,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecWatermarkAssigner(org.apache.calcite.rex.RexNode watermarkExpr,
int rowtimeFieldIndex,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
PlannerNamedWindowProperty[] namedWindowProperties,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecWindowAggregate(int[] grouping,
org.apache.calcite.rel.core.AggregateCall[] aggCalls,
WindowingStrategy windowing,
PlannerNamedWindowProperty[] namedWindowProperties,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWindowJoin(JoinSpec joinSpec,
WindowingStrategy leftWindowing,
WindowingStrategy rightWindowing,
InputProperty leftInputProperty,
InputProperty rightInputProperty,
RowType outputType,
String description) |
StreamExecWindowJoin(JoinSpec joinSpec,
WindowingStrategy leftWindowing,
WindowingStrategy rightWindowing,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWindowRank(RankType rankType,
PartitionSpec partitionSpec,
SortSpec sortSpec,
RankRange rankRange,
boolean outputRankNumber,
WindowingStrategy windowing,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecWindowRank(RankType rankType,
PartitionSpec partitionSpec,
SortSpec sortSpec,
RankRange rankRange,
boolean outputRankNumber,
WindowingStrategy windowing,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
StreamExecWindowTableFunction(TimeAttributeWindowingStrategy windowingStrategy,
Boolean emitPerRecord,
InputProperty inputProperty,
RowType outputType,
String description) |
StreamExecWindowTableFunction(TimeAttributeWindowingStrategy windowingStrategy,
Boolean emitPerRecord,
int id,
List<InputProperty> inputProperties,
RowType outputType,
String description) |
Modifier and Type | Method and Description |
---|---|
static ArrowReader |
ArrowUtils.createArrowReader(org.apache.arrow.vector.VectorSchemaRoot root,
RowType rowType)
Creates an ArrowReader for the specified VectorSchemaRoot. |
static ArrowWriter<RowData> |
ArrowUtils.createRowDataArrowWriter(org.apache.arrow.vector.VectorSchemaRoot root,
RowType rowType)
Creates an ArrowWriter for the specified VectorSchemaRoot. |
static org.apache.arrow.vector.types.pojo.Schema |
ArrowUtils.toArrowSchema(RowType rowType)
Returns the Arrow schema of the specified type. |
Modifier and Type | Field and Description |
---|---|
protected RowType |
ArrowSerializer.inputType
The input RowType. |
protected RowType |
ArrowSerializer.outputType
The output RowType. |
Constructor and Description |
---|
ArrowSerializer(RowType inputType,
RowType outputType) |
Constructor and Description |
---|
MiniBatchGroupAggFunction(GeneratedAggsHandleFunction genAggsHandler,
GeneratedRecordEqualiser genRecordEqualiser,
LogicalType[] accTypes,
RowType inputType,
int indexOfCountStar,
boolean generateUpdateBefore,
long stateRetentionTime)
Creates a MiniBatchGroupAggFunction. |
Modifier and Type | Method and Description |
---|---|
static HashJoinOperator |
HashJoinOperator.newHashJoinOperator(HashJoinType type,
GeneratedJoinCondition condFuncCode,
boolean reverseJoinFunction,
boolean[] filterNullKeys,
GeneratedProjection buildProjectionCode,
GeneratedProjection probeProjectionCode,
boolean tryDistinctBuildRow,
int buildRowSize,
long buildRowCount,
long probeRowCount,
RowType keyType) |
Modifier and Type | Field and Description |
---|---|
protected RowType |
AbstractStatelessFunctionOperator.inputType
The input logical type. |
protected RowType |
AbstractStatelessFunctionOperator.outputType
The output logical type. |
protected RowType |
AbstractStatelessFunctionOperator.userDefinedFunctionInputType
The user-defined function input logical type. |
protected RowType |
AbstractStatelessFunctionOperator.userDefinedFunctionOutputType
The user-defined function output logical type. |
Modifier and Type | Method and Description |
---|---|
abstract RowType |
AbstractStatelessFunctionOperator.createUserDefinedFunctionOutputType() |
Modifier and Type | Method and Description |
---|---|
abstract FlinkFnApi.CoderInfoDescriptor |
AbstractStatelessFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
abstract FlinkFnApi.CoderInfoDescriptor |
AbstractStatelessFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
Constructor and Description |
---|
AbstractStatelessFunctionOperator(Configuration config,
RowType inputType,
RowType outputType,
int[] userDefinedFunctionInputOffsets) |
Modifier and Type | Field and Description |
---|---|
protected RowType |
AbstractPythonStreamAggregateOperator.inputType
The input logical type. |
protected RowType |
AbstractPythonStreamAggregateOperator.outputType
The output logical type. |
protected RowType |
AbstractPythonStreamAggregateOperator.userDefinedFunctionInputType
The user-defined function input logical type. |
protected RowType |
AbstractPythonStreamAggregateOperator.userDefinedFunctionOutputType
The user-defined function output logical type. |
Modifier and Type | Method and Description |
---|---|
RowType |
PythonStreamGroupWindowAggregateOperator.createUserDefinedFunctionInputType() |
RowType |
AbstractPythonStreamGroupAggregateOperator.createUserDefinedFunctionInputType() |
abstract RowType |
AbstractPythonStreamAggregateOperator.createUserDefinedFunctionInputType() |
RowType |
PythonStreamGroupWindowAggregateOperator.createUserDefinedFunctionOutputType() |
RowType |
AbstractPythonStreamGroupAggregateOperator.createUserDefinedFunctionOutputType() |
abstract RowType |
AbstractPythonStreamAggregateOperator.createUserDefinedFunctionOutputType() |
protected RowType |
AbstractPythonStreamAggregateOperator.getKeyType() |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
PythonStreamGroupWindowAggregateOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
AbstractPythonStreamAggregateOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
PythonStreamGroupWindowAggregateOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
FlinkFnApi.CoderInfoDescriptor |
AbstractPythonStreamAggregateOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
Constructor and Description |
---|
AbstractPythonStreamAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewUtils.DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore) |
AbstractPythonStreamGroupAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewUtils.DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore,
long minRetentionTime,
long maxRetentionTime) |
PythonStreamGroupAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewUtils.DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean countStarInserted,
boolean generateUpdateBefore,
long minRetentionTime,
long maxRetentionTime) |
PythonStreamGroupTableAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewUtils.DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore,
long minRetentionTime,
long maxRetentionTime) |
PythonStreamGroupWindowAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewUtils.DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean generateUpdateBefore,
boolean countStarInserted,
int inputTimeFieldIndex,
WindowAssigner<W> windowAssigner,
org.apache.flink.table.planner.plan.logical.LogicalWindow window,
long allowedLateness,
PlannerNamedWindowProperty[] namedProperties,
java.time.ZoneId shiftTimeZone) |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
AbstractArrowPythonAggregateFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
AbstractArrowPythonAggregateFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
Constructor and Description |
---|
AbstractArrowPythonAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int[] groupingSet,
int[] udafInputOffsets) |
Modifier and Type | Method and Description |
---|---|
RowType |
BatchArrowPythonGroupWindowAggregateFunctionOperator.createUserDefinedFunctionOutputType() |
RowType |
BatchArrowPythonGroupAggregateFunctionOperator.createUserDefinedFunctionOutputType() |
RowType |
BatchArrowPythonOverWindowAggregateFunctionOperator.createUserDefinedFunctionOutputType() |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
BatchArrowPythonOverWindowAggregateFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
Constructor and Description |
---|
BatchArrowPythonGroupAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int[] groupKey,
int[] groupingSet,
int[] udafInputOffsets) |
BatchArrowPythonGroupWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
int maxLimitSize,
long windowSize,
long slideSize,
int[] namedProperties,
int[] groupKey,
int[] groupingSet,
int[] udafInputOffsets) |
BatchArrowPythonOverWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
long[] lowerBoundary,
long[] upperBoundary,
boolean[] isRangeWindows,
int[] aggWindowIndex,
int[] groupKey,
int[] groupingSet,
int[] udafInputOffsets,
int inputTimeFieldIndex,
boolean asc) |
Modifier and Type | Method and Description |
---|---|
RowType |
StreamArrowPythonGroupWindowAggregateFunctionOperator.createUserDefinedFunctionOutputType() |
RowType |
AbstractStreamArrowPythonOverWindowAggregateFunctionOperator.createUserDefinedFunctionOutputType() |
Constructor and Description |
---|
AbstractStreamArrowPythonBoundedRangeOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
AbstractStreamArrowPythonBoundedRowsOperator(Configuration config,
long minRetentionTime,
long maxRetentionTime,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
AbstractStreamArrowPythonOverWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
StreamArrowPythonGroupWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
WindowAssigner<W> windowAssigner,
Trigger<W> trigger,
long allowedLateness,
PlannerNamedWindowProperty[] namedProperties,
int[] groupingSet,
int[] udafInputOffsets,
java.time.ZoneId shiftTimeZone) |
StreamArrowPythonProcTimeBoundedRangeOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
StreamArrowPythonProcTimeBoundedRowsOperator(Configuration config,
long minRetentionTime,
long maxRetentionTime,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
StreamArrowPythonRowTimeBoundedRangeOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
StreamArrowPythonRowTimeBoundedRowsOperator(Configuration config,
long minRetentionTime,
long maxRetentionTime,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
Modifier and Type | Method and Description |
---|---|
RowType |
AbstractPythonScalarFunctionOperator.createUserDefinedFunctionOutputType() |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
PythonScalarFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
PythonScalarFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
Constructor and Description |
---|
AbstractPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
PythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
ArrowPythonScalarFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
ArrowPythonScalarFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutputType) |
Constructor and Description |
---|
ArrowPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
Modifier and Type | Method and Description |
---|---|
RowType |
PythonTableFunctionOperator.createUserDefinedFunctionOutputType() |
Modifier and Type | Method and Description |
---|---|
FlinkFnApi.CoderInfoDescriptor |
PythonTableFunctionOperator.createInputCoderInfoDescriptor(RowType runnerInputType) |
FlinkFnApi.CoderInfoDescriptor |
PythonTableFunctionOperator.createOutputCoderInfoDescriptor(RowType runnerOutType) |
Constructor and Description |
---|
PythonTableFunctionOperator(Configuration config,
PythonFunctionInfo tableFunction,
RowType inputType,
RowType outputType,
int[] udtfInputOffsets,
FlinkJoinType joinType) |
Modifier and Type | Method and Description |
---|---|
RowType |
InternalTypeInfo.toRowType() |
Modifier and Type | Method and Description |
---|---|
static <T> RowDataSerializer |
InternalSerializers.create(RowType type)
Creates a TypeSerializer for internal data structures of the given RowType. |
static InternalTypeInfo<RowData> |
InternalTypeInfo.of(RowType type)
Creates type information for a RowType represented by internal data structures. |
FlinkFnApi.Schema.FieldType |
PythonTypeUtils.LogicalTypeToProtoTypeConverter.visit(RowType rowType) |
Constructor and Description |
---|
RowDataSerializer(RowType rowType) |
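The entries above show the two typical ways to obtain internal runtime type utilities from a RowType: `InternalSerializers.create(RowType)` for a RowData serializer and `InternalTypeInfo.of(RowType)` for type information. A minimal sketch, assuming the `flink-table-runtime` and `flink-table-common` modules are on the classpath (the class name `InternalSerializerExample` is illustrative, not part of the API):

```java
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalSerializers;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.runtime.typeutils.RowDataSerializer;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class InternalSerializerExample {
    public static void main(String[] args) {
        // Logical type ROW<id INT, name STRING>.
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // Serializer for the internal RowData representation of this row type.
        RowDataSerializer serializer = InternalSerializers.create(rowType);

        // Type information wrapping the same logical type; toRowType() recovers it.
        InternalTypeInfo<RowData> typeInfo = InternalTypeInfo.of(rowType);
        System.out.println(typeInfo.toRowType().getFieldCount());
    }
}
```

`InternalTypeInfo.of(...)` and `InternalSerializers.create(...)` pair naturally: the former is what DataStream conversions expect, the latter is what stateful operators use for state serialization.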
Modifier and Type | Method and Description |
---|---|
static RowType |
RowType.of(boolean isNullable,
LogicalType... types) |
static RowType |
RowType.of(boolean nullable,
LogicalType[] types,
String[] names) |
static RowType |
RowType.of(LogicalType... types) |
static RowType |
RowType.of(LogicalType[] types,
String[] names) |
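The four `RowType.of` overloads listed above differ only in whether nullability and field names are supplied; omitting names yields generated `f0`, `f1`, ... fields. A minimal sketch of the named-field variant, assuming `flink-table-common` is on the classpath (the class name `RowTypeOfExample` is illustrative):

```java
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class RowTypeOfExample {
    public static void main(String[] args) {
        // Build ROW<id INT, name VARCHAR(255)> via the static factory.
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(255)},
                new String[] {"id", "name"});

        System.out.println(rowType.getFieldCount());   // 2
        System.out.println(rowType.getFieldNames());   // [id, name]
    }
}
```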
Modifier and Type | Method and Description |
---|---|
R |
LogicalTypeVisitor.visit(RowType rowType) |
Modifier and Type | Method and Description |
---|---|
static RowType |
LogicalTypeUtils.renameRowFields(RowType rowType,
List<String> newFieldNames)
Renames the fields of the given RowType. |
static RowType |
LogicalTypeUtils.toRowType(LogicalType t)
Converts any logical type to a row type. |
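As the table shows, `LogicalTypeUtils.renameRowFields` produces a structurally identical RowType with new field names. A minimal sketch, assuming `flink-table-common` is on the classpath (the class name `RenameRowFieldsExample` is illustrative):

```java
import java.util.Arrays;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;

public class RenameRowFieldsExample {
    public static void main(String[] args) {
        // Original type with default-style field names.
        RowType original = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(10)},
                new String[] {"f0", "f1"});

        // Same field types, new names; the list length must match the field count.
        RowType renamed = LogicalTypeUtils.renameRowFields(original, Arrays.asList("id", "name"));
        System.out.println(renamed.getFieldNames());   // [id, name]
    }
}
```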
Modifier and Type | Method and Description |
---|---|
static RowType |
LogicalTypeUtils.renameRowFields(RowType rowType,
List<String> newFieldNames)
Renames the fields of the given RowType. |
LogicalType |
LogicalTypeDuplicator.visit(RowType rowType) |
R |
LogicalTypeDefaultVisitor.visit(RowType rowType) |
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.