Modifier and Type | Method and Description |
---|---|
JdbcRowConverter | PostgresDialect.getRowConverter(RowType rowType) |
JdbcRowConverter | JdbcDialect.getRowConverter(RowType rowType) Gets a converter that converts between JDBC objects and Flink internal objects. |
JdbcRowConverter | MySQLDialect.getRowConverter(RowType rowType) |
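For orientation, a minimal sketch of obtaining a dialect's converter for a given row shape (assuming Flink 1.12-era package locations; the JDBC URL is a placeholder):

```java
// A minimal sketch (Flink 1.12-era APIs; package names may differ by version).
import org.apache.flink.connector.jdbc.dialect.JdbcDialect;
import org.apache.flink.connector.jdbc.dialect.JdbcDialects;
import org.apache.flink.connector.jdbc.internal.converter.JdbcRowConverter;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class DialectConverterSketch {
    public static void main(String[] args) {
        // Describe the table's row shape as a RowType.
        RowType rowType = RowType.of(new IntType(), new VarCharType(255));
        // Resolve the dialect from a JDBC URL, then ask it for a converter
        // that maps between JDBC objects and Flink's internal RowData.
        JdbcDialect dialect = JdbcDialects.get("jdbc:postgresql://localhost:5432/db").get();
        JdbcRowConverter converter = dialect.getRowConverter(rowType);
    }
}
```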
Modifier and Type | Field and Description |
---|---|
protected RowType | AbstractJdbcRowConverter.rowType |
Constructor and Description |
---|
AbstractJdbcRowConverter(RowType rowType) |
DerbyRowConverter(RowType rowType) |
MySQLRowConverter(RowType rowType) |
PostgresRowConverter(RowType rowType) |
Constructor and Description |
---|
JdbcRowDataLookupFunction(JdbcOptions options,
JdbcLookupOptions lookupOptions,
String[] fieldNames,
DataType[] fieldTypes,
String[] keyNames,
RowType rowType) |
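A hedged sketch of wiring up this constructor; the JdbcOptions and JdbcLookupOptions builder calls are assumptions from the Flink 1.12 JDBC connector, and the table and field names are illustrative:

```java
// A hedged sketch; builder method names follow the Flink 1.12 JDBC connector
// and may differ in other versions.
import org.apache.flink.connector.jdbc.internal.options.JdbcLookupOptions;
import org.apache.flink.connector.jdbc.internal.options.JdbcOptions;
import org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class JdbcLookupSketch {
    public static void main(String[] args) {
        JdbcOptions options = JdbcOptions.builder()
                .setDBUrl("jdbc:postgresql://localhost:5432/db")
                .setTableName("users")
                .build();
        JdbcLookupOptions lookupOptions = JdbcLookupOptions.builder()
                .setCacheMaxSize(1000)
                .setCacheExpireMs(60_000)
                .setMaxRetryTimes(3)
                .build();
        String[] fieldNames = {"id", "name"};
        DataType[] fieldTypes = {DataTypes.INT(), DataTypes.STRING()};
        // The RowType mirrors fieldNames/fieldTypes; lookups run on the "id" key.
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                fieldNames);
        JdbcRowDataLookupFunction lookup = new JdbcRowDataLookupFunction(
                options, lookupOptions, fieldNames, fieldTypes,
                new String[] {"id"}, rowType);
    }
}
```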
Constructor and Description |
---|
HiveBulkFormatAdapter(JobConfWrapper jobConfWrapper,
List<String> partitionKeys,
String[] fieldNames,
DataType[] fieldTypes,
String hiveVersion,
RowType producedRowType,
boolean useMapRedReader) |
HiveCompactReaderFactory(org.apache.hadoop.hive.metastore.api.StorageDescriptor sd,
Properties properties,
org.apache.hadoop.mapred.JobConf jobConf,
CatalogTable catalogTable,
String hiveVersion,
RowType producedRowType,
boolean useMapRedReader) |
Modifier and Type | Method and Description |
---|---|
static AvroToRowDataConverters.AvroToRowDataConverter | AvroToRowDataConverters.createRowConverter(RowType rowType) |
Constructor and Description |
---|
AvroRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> typeInfo)
Creates an Avro deserialization schema for the given logical type.
|
AvroRowDataSerializationSchema(RowType rowType)
Creates an Avro serialization schema with the given record row type.
|
AvroRowDataSerializationSchema(RowType rowType,
SerializationSchema<org.apache.avro.generic.GenericRecord> nestedSchema,
RowDataToAvroConverters.RowDataToAvroConverter runtimeConverter)
Creates an Avro serialization schema with the given record row type, nested schema and
runtime converters.
|
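A minimal sketch tying both schemas to one RowType (the field names are illustrative; InternalTypeInfo.of is listed further down this page):

```java
// A minimal sketch, assuming Flink 1.12-era flink-avro APIs.
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.avro.AvroRowDataDeserializationSchema;
import org.apache.flink.formats.avro.AvroRowDataSerializationSchema;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class AvroRowDataSketch {
    public static void main(String[] args) {
        // Hypothetical row shape: (id BIGINT, name STRING).
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // Serialization: RowData -> Avro bytes for the given record row type.
        AvroRowDataSerializationSchema ser = new AvroRowDataSerializationSchema(rowType);

        // Deserialization: Avro bytes -> RowData; the produced TypeInformation
        // is derived from the same RowType.
        TypeInformation<RowData> typeInfo = InternalTypeInfo.of(rowType);
        AvroRowDataDeserializationSchema deser =
                new AvroRowDataDeserializationSchema(rowType, typeInfo);
    }
}
```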
Modifier and Type | Method and Description |
---|---|
static RowType | DebeziumAvroSerializationSchema.createDebeziumAvroRowType(DataType dataType) |
static RowType | DebeziumAvroDeserializationSchema.createDebeziumAvroRowType(DataType databaseSchema) |
Constructor and Description |
---|
DebeziumAvroDeserializationSchema(RowType rowType,
TypeInformation<RowData> producedTypeInfo,
String schemaRegistryUrl) |
DebeziumAvroDeserializationSchema(RowType rowType,
TypeInformation<RowData> producedTypeInfo,
String schemaRegistryUrl,
Map<String,?> registryConfigs) |
DebeziumAvroSerializationSchema(RowType rowType,
String schemaRegistryUrl,
String schemaRegistrySubject) |
DebeziumAvroSerializationSchema(RowType rowType,
String schemaRegistryUrl,
String schemaRegistrySubject,
Map<String,?> registryConfigs) |
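A short sketch of the constructors above; the registry URL and subject are placeholders, and the package location is assumed from Flink 1.12's flink-avro-confluent-registry module:

```java
// A hedged sketch; URL and subject are placeholders.
import org.apache.flink.formats.avro.registry.confluent.debezium.DebeziumAvroDeserializationSchema;
import org.apache.flink.formats.avro.registry.confluent.debezium.DebeziumAvroSerializationSchema;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;

public class DebeziumAvroSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(new IntType());
        // Writer side: changelog rows -> Debezium-flavored Avro, registered under a subject.
        DebeziumAvroSerializationSchema ser = new DebeziumAvroSerializationSchema(
                rowType, "http://schema-registry:8081", "mytopic-value");
        // Reader side: Debezium Avro -> changelog RowData.
        DebeziumAvroDeserializationSchema deser = new DebeziumAvroDeserializationSchema(
                rowType, InternalTypeInfo.of(rowType), "http://schema-registry:8081");
    }
}
```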
Modifier and Type | Method and Description |
---|---|
static org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema | CsvRowSchemaConverter.convert(RowType rowType) Converts a RowType to a CsvSchema. |
static RowDataToCsvConverters.RowDataToCsvConverter | RowDataToCsvConverters.createRowConverter(RowType type) |
CsvToRowDataConverters.CsvToRowDataConverter | CsvToRowDataConverters.createRowConverter(RowType rowType, boolean isTopLevel) |
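A minimal sketch of CsvRowSchemaConverter.convert; note the returned CsvSchema is Flink's shaded Jackson class:

```java
// A minimal sketch: derive a Jackson CsvSchema from a RowType.
import org.apache.flink.formats.csv.CsvRowSchemaConverter;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class CsvSchemaSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});
        // Column names and types of the CsvSchema mirror the RowType fields.
        CsvSchema csvSchema = CsvRowSchemaConverter.convert(rowType);
        System.out.println(csvSchema.getColumnDesc());
    }
}
```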
Constructor and Description |
---|
Builder(RowType rowType) Creates a CsvRowDataSerializationSchema expecting the given RowType. |
Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) Creates a CSV deserialization schema for the given TypeInformation with optional parameters. |
CsvInputFormat(Path[] filePaths,
DataType[] fieldTypes,
String[] fieldNames,
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvSchema csvSchema,
RowType formatRowType,
int[] selectFields,
List<String> partitionKeys,
String defaultPartValue,
long limit,
int[] csvSelectFieldToProjectFieldMapping,
int[] csvSelectFieldToCsvFieldMapping,
boolean ignoreParseErrors) |
Modifier and Type | Method and Description |
---|---|
JsonToRowDataConverters.JsonToRowDataConverter | JsonToRowDataConverters.createRowConverter(RowType rowType) |
Constructor and Description |
---|
JsonRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> resultTypeInfo,
boolean failOnMissingField,
boolean ignoreParseErrors,
TimestampFormat timestampFormat) |
JsonRowDataSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral) |
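A minimal sketch using both constructors above (assuming Flink 1.12-era flink-json packages; TimestampFormat and JsonOptions moved in later versions):

```java
// A minimal sketch, assuming Flink 1.12-era flink-json APIs.
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.json.JsonOptions;
import org.apache.flink.formats.json.JsonRowDataDeserializationSchema;
import org.apache.flink.formats.json.JsonRowDataSerializationSchema;
import org.apache.flink.formats.json.TimestampFormat;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class JsonRowDataSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});
        TypeInformation<RowData> typeInfo = InternalTypeInfo.of(rowType);

        // JSON bytes -> RowData: tolerate missing fields, fail on parse errors.
        JsonRowDataDeserializationSchema deser = new JsonRowDataDeserializationSchema(
                rowType, typeInfo, /* failOnMissingField */ false,
                /* ignoreParseErrors */ false, TimestampFormat.ISO_8601);

        // RowData -> JSON bytes: fail if a map key is null.
        JsonRowDataSerializationSchema ser = new JsonRowDataSerializationSchema(
                rowType, TimestampFormat.ISO_8601,
                JsonOptions.MapNullKeyMode.FAIL, "null");
    }
}
```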
Modifier and Type | Method and Description |
---|---|
static CanalJsonDeserializationSchema.Builder | CanalJsonDeserializationSchema.builder(RowType rowType, TypeInformation<RowData> resultTypeInfo) Creates a builder for building a CanalJsonDeserializationSchema. |
Constructor and Description |
---|
CanalJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral) |
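A hedged sketch of the Canal builder; only the two arguments listed above are grounded in this page, and the terminal build() call is an assumption:

```java
// A hedged sketch of the Canal JSON builder.
import org.apache.flink.formats.json.canal.CanalJsonDeserializationSchema;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;

public class CanalJsonSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(new IntType());
        // The builder is parameterized by the row shape and the produced type info;
        // further options (parse-error handling, timestamp format) are optional.
        CanalJsonDeserializationSchema deser = CanalJsonDeserializationSchema
                .builder(rowType, InternalTypeInfo.of(rowType))
                .build();
    }
}
```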
Constructor and Description |
---|
DebeziumJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral) |
Constructor and Description |
---|
MaxwellJsonDeserializationSchema(RowType rowType,
TypeInformation<RowData> resultTypeInfo,
boolean ignoreParseErrors,
TimestampFormat timestampFormatOption) |
MaxwellJsonSerializationSchema(RowType rowType,
TimestampFormat timestampFormat,
JsonOptions.MapNullKeyMode mapNullKeyMode,
String mapNullKeyLiteral) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
ParquetColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig,
RowType producedRowType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive)
Creates a partitioned ParquetColumnarRowInputFormat; the partition columns can be generated from the Path. |
Constructor and Description |
---|
ParquetColumnarRowInputFormat(Configuration hadoopConfig,
RowType projectedType,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive)
Constructor to create a Parquet format without extra fields.
|
ParquetColumnarRowInputFormat(Configuration hadoopConfig,
RowType projectedType,
RowType producedType,
ColumnBatchFactory<SplitT> batchFactory,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive)
Constructor to create a Parquet format with extra fields created by the ColumnBatchFactory. |
ParquetVectorizedInputFormat(SerializableConfiguration hadoopConfig,
RowType projectedType,
ColumnBatchFactory<SplitT> batchFactory,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive) |
Modifier and Type | Method and Description |
---|---|
static ParquetWriterFactory<RowData> |
ParquetRowDataBuilder.createWriterFactory(RowType rowType,
Configuration conf,
boolean utcTimestamp)
Creates a Parquet BulkWriter.Factory. |
Constructor and Description |
---|
FlinkParquetBuilder(RowType rowType,
Configuration conf,
boolean utcTimestamp) |
ParquetRowDataBuilder(org.apache.parquet.io.OutputFile path,
RowType rowType,
boolean utcTimestamp) |
ParquetRowDataWriter(org.apache.parquet.io.api.RecordConsumer recordConsumer,
RowType rowType,
org.apache.parquet.schema.GroupType schema,
boolean utcTimestamp) |
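A minimal sketch: createWriterFactory produces a BulkWriter.Factory for RowData that can back a StreamingFileSink (the output path and Hadoop Configuration are placeholders):

```java
// A minimal sketch, assuming Flink 1.12-era flink-parquet APIs.
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

public class ParquetWriteSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});
        // utcTimestamp = true writes TIMESTAMP columns in UTC.
        ParquetWriterFactory<RowData> factory = ParquetRowDataBuilder.createWriterFactory(
                rowType, new Configuration(), /* utcTimestamp */ true);
        StreamingFileSink<RowData> sink = StreamingFileSink
                .forBulkFormat(new Path("/tmp/parquet-out"), factory)
                .build();
    }
}
```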
Modifier and Type | Method and Description |
---|---|
static org.apache.parquet.schema.MessageType | ParquetSchemaConverter.convertToParquetMessageType(String name, RowType rowType) |
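A short sketch of the schema conversion; the message name and the utils package location are assumptions:

```java
// A minimal sketch: map a RowType to the equivalent Parquet MessageType.
import org.apache.flink.formats.parquet.utils.ParquetSchemaConverter;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.parquet.schema.MessageType;

public class ParquetSchemaSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(new IntType());
        // "flink_schema" is just an illustrative message name.
        MessageType messageType =
                ParquetSchemaConverter.convertToParquetMessageType("flink_schema", rowType);
        System.out.println(messageType);
    }
}
```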
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
OrcColumnarRowFileInputFormat.createPartitionedFormat(OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> shim,
Configuration hadoopConfig,
RowType tableType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize)
Creates a partitioned OrcColumnarRowFileInputFormat; the partition columns can be generated from the split. |
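A hedged sketch mirroring the parameter list above; OrcShim.defaultShim(), the PartitionFieldExtractor factory, and the package locations are assumptions that may differ by Flink version:

```java
// A hedged sketch of createPartitionedFormat.
import java.util.ArrayList;
import java.util.Arrays;
import org.apache.flink.connector.file.src.FileSourceSplit;
import org.apache.flink.connector.file.src.reader.BulkFormat;
import org.apache.flink.orc.OrcColumnarRowFileInputFormat;
import org.apache.flink.orc.shim.OrcShim;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.filesystem.PartitionFieldExtractor;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

public class OrcReadSketch {
    public static void main(String[] args) {
        // Full table shape including the partition column "dt".
        RowType tableType = RowType.of(
                new LogicalType[] {
                        new BigIntType(),
                        new VarCharType(VarCharType.MAX_LENGTH),
                        new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name", "dt"});
        BulkFormat<RowData, FileSourceSplit> format =
                OrcColumnarRowFileInputFormat.createPartitionedFormat(
                        OrcShim.defaultShim(),
                        new Configuration(),
                        tableType,
                        Arrays.asList("dt"),                       // partition keys
                        PartitionFieldExtractor.forFileSystem(""), // default partition value
                        new int[] {0, 1, 2},                       // selected fields
                        new ArrayList<>(),                         // no pushed-down predicates
                        2048);                                     // batch size
    }
}
```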
Constructor and Description |
---|
OrcColumnarRowFileInputFormat(OrcShim<BatchT> shim,
Configuration hadoopConfig,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
ColumnBatchFactory<BatchT,SplitT> batchFactory,
RowType projectedOutputType) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
OrcNoHiveColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig,
RowType tableType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize)
Creates a partitioned OrcColumnarRowFileInputFormat; the partition columns can be generated from the split. |
Modifier and Type | Method and Description |
---|---|
default RowType | FileSystemFormatFactory.ReaderContext.getFormatRowType() Returns the RowType of the table, excluding partition key fields. |
Modifier and Type | Method and Description |
---|---|
DataGeneratorContainer | RandomGeneratorVisitor.visit(RowType rowType) |
Constructor and Description |
---|
FileSystemLookupFunction(PartitionFetcher<P> partitionFetcher,
PartitionFetcher.Context<P> fetcherContext,
PartitionReader<P,RowData> partitionReader,
RowType rowType,
int[] lookupKeys,
java.time.Duration reloadInterval) |
Modifier and Type | Method and Description |
---|---|
static RowType | DynamicSourceUtils.createProducedType(TableSchema schema, DynamicTableSource source) Returns the RowType that a source should produce as the input into the runtime. |
Modifier and Type | Method and Description |
---|---|
static RowArrowReader | ArrowUtils.createRowArrowReader(org.apache.arrow.vector.VectorSchemaRoot root, RowType rowType) Creates an ArrowReader for the specified VectorSchemaRoot. |
static ArrowWriter<Row> | ArrowUtils.createRowArrowWriter(org.apache.arrow.vector.VectorSchemaRoot root, RowType rowType) Creates an ArrowWriter for the specified VectorSchemaRoot. |
static RowDataArrowReader | ArrowUtils.createRowDataArrowReader(org.apache.arrow.vector.VectorSchemaRoot root, RowType rowType) Creates an ArrowReader for the blink planner for the specified VectorSchemaRoot. |
static ArrowWriter<RowData> | ArrowUtils.createRowDataArrowWriter(org.apache.arrow.vector.VectorSchemaRoot root, RowType rowType) Creates an ArrowWriter for the blink planner for the specified VectorSchemaRoot. |
static org.apache.arrow.vector.types.pojo.Schema | ArrowUtils.toArrowSchema(RowType rowType) Returns the Arrow schema of the specified type. |
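A minimal sketch: toArrowSchema bridges a RowType to Arrow, and the resulting schema allocates the VectorSchemaRoot that the readers and writers above operate on:

```java
// A minimal sketch, assuming the flink-python Arrow utilities above.
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.arrow.vector.types.pojo.Schema;
import org.apache.flink.table.runtime.arrow.ArrowUtils;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;

public class ArrowSchemaSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(new IntType());
        Schema arrowSchema = ArrowUtils.toArrowSchema(rowType);
        try (RootAllocator allocator = new RootAllocator(Long.MAX_VALUE);
             VectorSchemaRoot root = VectorSchemaRoot.create(arrowSchema, allocator)) {
            // root can now be passed to ArrowUtils.createRowDataArrowWriter(root, rowType).
        }
    }
}
```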
Modifier and Type | Field and Description |
---|---|
protected RowType | ArrowSerializer.inputType The input RowType. |
protected RowType | ArrowSerializer.outputType The output RowType. |
Constructor and Description |
---|
ArrowSerializer(RowType inputType,
RowType outputType) |
RowArrowSerializer(RowType inputType,
RowType outputType) |
RowDataArrowSerializer(RowType inputType,
RowType outputType) |
Modifier and Type | Field and Description |
---|---|
protected RowType | AbstractPythonStatelessFunctionFlatMap.inputType The input logical type. |
protected RowType | AbstractPythonStatelessFunctionFlatMap.outputType The output logical type. |
protected RowType | AbstractPythonStatelessFunctionFlatMap.userDefinedFunctionInputType The user-defined function input logical type. |
protected RowType | AbstractPythonStatelessFunctionFlatMap.userDefinedFunctionOutputType The user-defined function output logical type. |
Constructor and Description |
---|
AbstractPythonScalarFunctionFlatMap(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
AbstractPythonStatelessFunctionFlatMap(Configuration config,
RowType inputType,
RowType outputType,
int[] userDefinedFunctionInputOffsets) |
PythonScalarFunctionFlatMap(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
PythonTableFunctionFlatMap(Configuration config,
PythonFunctionInfo tableFunction,
RowType inputType,
RowType outputType,
int[] udtfInputOffsets,
org.apache.calcite.rel.core.JoinRelType joinType) |
Constructor and Description |
---|
ArrowPythonScalarFunctionFlatMap(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
Constructor and Description |
---|
MiniBatchGroupAggFunction(GeneratedAggsHandleFunction genAggsHandler,
GeneratedRecordEqualiser genRecordEqualiser,
LogicalType[] accTypes,
RowType inputType,
int indexOfCountStar,
boolean generateUpdateBefore,
long stateRetentionTime)
Creates a MiniBatchGroupAggFunction. |
Modifier and Type | Method and Description |
---|---|
static HashJoinOperator |
HashJoinOperator.newHashJoinOperator(HashJoinType type,
GeneratedJoinCondition condFuncCode,
boolean reverseJoinFunction,
boolean[] filterNullKeys,
GeneratedProjection buildProjectionCode,
GeneratedProjection probeProjectionCode,
boolean tryDistinctBuildRow,
int buildRowSize,
long buildRowCount,
long probeRowCount,
RowType keyType) |
Modifier and Type | Field and Description |
---|---|
protected RowType | AbstractStatelessFunctionOperator.inputType The input logical type. |
protected RowType | AbstractStatelessFunctionOperator.outputType The output logical type. |
protected RowType | AbstractStatelessFunctionOperator.userDefinedFunctionInputType The user-defined function input logical type. |
protected RowType | AbstractStatelessFunctionOperator.userDefinedFunctionOutputType The user-defined function output logical type. |
Constructor and Description |
---|
AbstractStatelessFunctionOperator(Configuration config,
RowType inputType,
RowType outputType,
int[] userDefinedFunctionInputOffsets) |
Modifier and Type | Field and Description |
---|---|
protected RowType | PythonStreamGroupAggregateOperator.inputType The input logical type. |
protected RowType | PythonStreamGroupAggregateOperator.outputType The output logical type. |
protected RowType | PythonStreamGroupAggregateOperator.userDefinedFunctionInputType The user-defined function input logical type. |
Modifier and Type | Method and Description |
---|---|
protected RowType | PythonStreamGroupAggregateOperator.getKeyType() |
Constructor and Description |
---|
PythonStreamGroupAggregateOperator(Configuration config,
RowType inputType,
RowType outputType,
PythonAggregateFunctionInfo[] aggregateFunctions,
DataViewUtils.DataViewSpec[][] dataViewSpecs,
int[] grouping,
int indexOfCountStar,
boolean countStarInserted,
boolean generateUpdateBefore,
long minRetentionTime,
long maxRetentionTime) |
Constructor and Description |
---|
AbstractArrowPythonAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int[] groupingSet,
int[] udafInputOffsets) |
Constructor and Description |
---|
BatchArrowPythonGroupAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int[] groupKey,
int[] groupingSet,
int[] udafInputOffsets) |
BatchArrowPythonGroupWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
int maxLimitSize,
long windowSize,
long slideSize,
int[] namedProperties,
int[] groupKey,
int[] groupingSet,
int[] udafInputOffsets) |
BatchArrowPythonOverWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
long[] lowerBoundary,
long[] upperBoundary,
boolean[] isRangeWindows,
int[] aggWindowIndex,
int[] groupKey,
int[] groupingSet,
int[] udafInputOffsets,
int inputTimeFieldIndex,
boolean asc) |
Constructor and Description |
---|
AbstractStreamArrowPythonBoundedRangeOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
AbstractStreamArrowPythonBoundedRowsOperator(Configuration config,
long minRetentionTime,
long maxRetentionTime,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
AbstractStreamArrowPythonOverWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
StreamArrowPythonGroupWindowAggregateFunctionOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
WindowAssigner<W> windowAssigner,
Trigger<W> trigger,
long allowedLateness,
int[] namedProperties,
int[] groupingSet,
int[] udafInputOffsets) |
StreamArrowPythonProcTimeBoundedRangeOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
StreamArrowPythonProcTimeBoundedRowsOperator(Configuration config,
long minRetentionTime,
long maxRetentionTime,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
StreamArrowPythonRowTimeBoundedRangeOperator(Configuration config,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
StreamArrowPythonRowTimeBoundedRowsOperator(Configuration config,
long minRetentionTime,
long maxRetentionTime,
PythonFunctionInfo[] pandasAggFunctions,
RowType inputType,
RowType outputType,
int inputTimeFieldIndex,
long lowerBoundary,
int[] groupingSet,
int[] udafInputOffsets) |
Constructor and Description |
---|
AbstractRowDataPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
AbstractRowPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
PythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
RowDataPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
Constructor and Description |
---|
ArrowPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
RowDataArrowPythonScalarFunctionOperator(Configuration config,
PythonFunctionInfo[] scalarFunctions,
RowType inputType,
RowType outputType,
int[] udfInputOffsets,
int[] forwardedFields) |
Constructor and Description |
---|
AbstractPythonTableFunctionOperator(Configuration config,
PythonFunctionInfo tableFunction,
RowType inputType,
RowType outputType,
int[] udtfInputOffsets,
org.apache.calcite.rel.core.JoinRelType joinType) |
PythonTableFunctionOperator(Configuration config,
PythonFunctionInfo tableFunction,
RowType inputType,
RowType outputType,
int[] udtfInputOffsets,
org.apache.calcite.rel.core.JoinRelType joinType) |
RowDataPythonTableFunctionOperator(Configuration config,
PythonFunctionInfo tableFunction,
RowType inputType,
RowType outputType,
int[] udtfInputOffsets,
org.apache.calcite.rel.core.JoinRelType joinType) |
Constructor and Description |
---|
BeamTableStatefulPythonFunctionRunner(String taskName,
PythonEnvironmentManager environmentManager,
RowType inputType,
RowType outputType,
String functionUrn,
FlinkFnApi.UserDefinedAggregateFunctions userDefinedFunctions,
String coderUrn,
Map<String,String> jobOptions,
FlinkMetricContainer flinkMetricContainer,
KeyedStateBackend keyedStateBackend,
TypeSerializer keySerializer,
MemoryManager memoryManager,
double managedMemoryFraction) |
BeamTableStatelessPythonFunctionRunner(String taskName,
PythonEnvironmentManager environmentManager,
RowType inputType,
RowType outputType,
String functionUrn,
FlinkFnApi.UserDefinedFunctions userDefinedFunctions,
String coderUrn,
Map<String,String> jobOptions,
FlinkMetricContainer flinkMetricContainer,
MemoryManager memoryManager,
double managedMemoryFraction) |
Modifier and Type | Method and Description |
---|---|
RowType | InternalTypeInfo.toRowType() |
Modifier and Type | Method and Description |
---|---|
static org.apache.beam.model.pipeline.v1.RunnerApi.Coder | PythonTypeUtils.getRowCoderProto(RowType rowType, String coderUrn) |
static InternalTypeInfo<RowData> | InternalTypeInfo.of(RowType type) Creates type information for a RowType represented by internal data structures. |
FlinkFnApi.Schema.FieldType | PythonTypeUtils.LogicalTypeToProtoTypeConverter.visit(RowType rowType) |
Constructor and Description |
---|
RowDataSerializer(RowType rowType) |
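A minimal sketch connecting the pieces above: the type information and the serializer are both derived from one RowType, and toRowType() round-trips back to the logical type:

```java
// A minimal sketch of InternalTypeInfo and RowDataSerializer.
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.runtime.typeutils.RowDataSerializer;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class InternalTypeInfoSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});
        InternalTypeInfo<RowData> typeInfo = InternalTypeInfo.of(rowType);
        // toRowType() recovers the logical type from the type information.
        RowType roundTripped = typeInfo.toRowType();
        // The serializer for internal row data with this shape.
        RowDataSerializer serializer = new RowDataSerializer(rowType);
    }
}
```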
Modifier and Type | Method and Description |
---|---|
static RowType | RowType.of(boolean isNullable, LogicalType... types) |
static RowType | RowType.of(boolean nullable, LogicalType[] types, String[] names) |
static RowType | RowType.of(LogicalType... types) |
static RowType | RowType.of(LogicalType[] types, String[] names) |
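A minimal sketch of the four factory variants (the field names and printed summary are illustrative):

```java
// A minimal sketch of RowType.of.
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class RowTypeOfSketch {
    public static void main(String[] args) {
        LogicalType[] types = {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)};
        String[] names = {"id", "name"};

        RowType positional = RowType.of(types);          // fields named f0, f1, ...
        RowType named = RowType.of(types, names);        // explicit field names
        RowType nullable = RowType.of(true, types);      // explicit nullability flag
        RowType namedNullable = RowType.of(false, types, names);
        System.out.println(named.asSummaryString());     // e.g. ROW<`id` INT, `name` VARCHAR(...)>
    }
}
```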
Modifier and Type | Method and Description |
---|---|
R | LogicalTypeVisitor.visit(RowType rowType) |
Modifier and Type | Method and Description |
---|---|
static RowType | LogicalTypeUtils.renameRowFields(RowType rowType, List<String> newFieldNames) Renames the fields of the given RowType. |
static RowType | LogicalTypeUtils.toRowType(LogicalType t) Converts any logical type to a row type. |
R | LogicalTypeDefaultVisitor.visit(RowType rowType) |
LogicalType | LogicalTypeDuplicator.visit(RowType rowType) |
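A minimal sketch of renameRowFields together with a LogicalTypeDefaultVisitor subclass:

```java
// A minimal sketch: rename fields, and walk a type with the default visitor.
import java.util.Arrays;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor;
import org.apache.flink.table.types.logical.utils.LogicalTypeUtils;

public class LogicalTypeUtilsSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // Same field types, new field names.
        RowType renamed = LogicalTypeUtils.renameRowFields(rowType, Arrays.asList("k", "v"));

        // Count the fields of a row type via the visitor; non-row types
        // fall through to defaultMethod.
        int fieldCount = rowType.accept(new LogicalTypeDefaultVisitor<Integer>() {
            @Override
            public Integer visit(RowType t) {
                return t.getFieldCount();
            }

            @Override
            protected Integer defaultMethod(LogicalType t) {
                return 0;
            }
        });
        System.out.println(renamed.asSummaryString() + " has " + fieldCount + " fields");
    }
}
```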