Modifier and Type | Method and Description |
---|---|
EncodingFormat<SerializationSchema<RowData>> |
AsyncDynamicTableSinkFactory.AsyncDynamicSinkContext.getEncodingFormat() |
Modifier and Type | Method and Description |
---|---|
DataGeneratorSource<RowData> |
DataGenTableSource.createSource() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataGenerator.next() |
Modifier and Type | Class and Description |
---|---|
class |
EnrichedRowData
|
Modifier and Type | Method and Description |
---|---|
RowData |
EnrichedRowData.getRow(int pos,
int numFields) |
RowData |
RowDataPartitionComputer.projectColumnsToWrite(RowData in) |
Modifier and Type | Method and Description |
---|---|
BulkWriter<RowData> |
FileSystemTableSink.ProjectionBulkFactory.create(FSDataOutputStream out) |
TypeInformation<RowData> |
DeserializationSchemaAdapter.getProducedType() |
RecordAndPosition<RowData> |
ColumnarRowIterator.next() |
Modifier and Type | Method and Description |
---|---|
void |
SerializationSchemaAdapter.encode(RowData element,
OutputStream stream) |
static EnrichedRowData |
EnrichedRowData.from(RowData fixedRow,
List<String> producedRowFields,
List<String> mutableRowFields,
List<String> fixedRowFields)
Creates a new
EnrichedRowData with the provided fixedRow as the immutable
static row, and uses the producedRowFields, fixedRowFields and mutableRowFields arguments to compute the index mapping. |
LinkedHashMap<String,String> |
RowDataPartitionComputer.generatePartValues(RowData in) |
String |
FileSystemTableSink.TableBucketAssigner.getBucketId(RowData element,
BucketAssigner.Context context) |
RowData |
RowDataPartitionComputer.projectColumnsToWrite(RowData in) |
EnrichedRowData |
EnrichedRowData.replaceMutableRow(RowData mutableRow)
Replaces the mutable
RowData backing this EnrichedRowData. |
boolean |
FileSystemTableSink.TableRollingPolicy.shouldRollOnEvent(PartFileInfo<String> partFileState,
RowData element) |
Constructor and Description |
---|
EnrichedRowData(RowData fixedRow,
int[] indexMapping) |
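To make the mapping above concrete, here is a minimal sketch of the fixed-row/mutable-row pattern. The field names are illustrative, and the import path of EnrichedRowData may differ between Flink versions:

```java
import java.util.Arrays;

// The import path of EnrichedRowData varies by Flink version
// (org.apache.flink.connector.file.table in newer releases).
import org.apache.flink.connector.file.table.EnrichedRowData;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;

public class EnrichedRowDataSketch {
    public static void main(String[] args) {
        // The immutable "fixed" row, e.g. a partition value derived from the file path.
        RowData fixedRow = GenericRowData.of(StringData.fromString("2024-01-01"));

        // The produced row is [id, dt, name]: "dt" is served by the fixed row,
        // the remaining fields by the mutable per-record row.
        EnrichedRowData enriched = EnrichedRowData.from(
                fixedRow,
                Arrays.asList("id", "dt", "name"), // producedRowFields
                Arrays.asList("id", "name"),       // mutableRowFields
                Arrays.asList("dt"));              // fixedRowFields

        // For each record read from the file, only the mutable part is swapped in.
        enriched.replaceMutableRow(GenericRowData.of(1L, StringData.fromString("alice")));
        System.out.println(enriched.getString(1)); // "2024-01-01", from the fixed row
    }
}
```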
Modifier and Type | Method and Description |
---|---|
BulkDecodingFormat<RowData> |
BulkReaderFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions)
Creates a
BulkDecodingFormat from the given context and format options. |
Modifier and Type | Method and Description |
---|---|
KinesisFirehoseDynamicSink.KinesisFirehoseDynamicSinkBuilder |
KinesisFirehoseDynamicSink.KinesisFirehoseDynamicSinkBuilder.setEncodingFormat(EncodingFormat<SerializationSchema<RowData>> encodingFormat) |
Constructor and Description |
---|
KinesisFirehoseDynamicSink(Integer maxBatchSize,
Integer maxInFlightRequests,
Integer maxBufferedRequests,
Long maxBufferSizeInBytes,
Long maxTimeInBufferMS,
Boolean failOnError,
DataType consumedDataType,
String deliveryStream,
Properties firehoseClientProperties,
EncodingFormat<SerializationSchema<RowData>> encodingFormat) |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.hbase.client.Mutation |
RowDataToMutationConverter.convertToMutation(RowData record) |
Modifier and Type | Method and Description |
---|---|
protected abstract InputFormat<RowData,?> |
AbstractHBaseDynamicTableSource.getInputFormat() |
Modifier and Type | Method and Description |
---|---|
RowData |
HBaseSerde.convertToNewRow(org.apache.hadoop.hbase.client.Result result)
Converts HBase
Result into a new RowData instance. |
RowData |
HBaseSerde.convertToReusedRow(org.apache.hadoop.hbase.client.Result result)
Converts HBase
Result into a reused RowData instance. |
RowData |
HBaseSerde.convertToRow(org.apache.hadoop.hbase.client.Result result)
Deprecated.
Use
HBaseSerde.convertToReusedRow(Result) instead. |
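The reused-row variant is intended for tight per-record loops. A sketch of the usage, assuming an already-configured HBaseSerde instance and an open scanner (both supplied elsewhere; the HBaseSerde package may vary by connector version):

```java
import org.apache.flink.connector.hbase.util.HBaseSerde; // package may vary by connector version
import org.apache.flink.table.data.RowData;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;

class HBaseSerdeSketch {
    static void emitAll(HBaseSerde serde, ResultScanner scanner) {
        for (Result result : scanner) {
            // Avoids allocating a new RowData per record; only safe when the
            // consumer does not hold a reference across iterations.
            RowData row = serde.convertToReusedRow(result);
            System.out.println(row);
        }
    }
}
```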
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.hbase.client.Delete |
HBaseSerde.createDeleteMutation(RowData row)
Returns an instance of Delete that removes a record from the HBase table.
|
org.apache.hadoop.hbase.client.Put |
HBaseSerde.createPutMutation(RowData row)
Returns an instance of Put that writes a record to the HBase table.
|
Modifier and Type | Method and Description |
---|---|
protected RowData |
HBaseRowDataInputFormat.mapResultToOutType(org.apache.hadoop.hbase.client.Result res) |
Modifier and Type | Method and Description |
---|---|
InputFormat<RowData,?> |
HBaseDynamicTableSource.getInputFormat() |
Modifier and Type | Method and Description |
---|---|
protected RowData |
HBaseRowDataInputFormat.mapResultToOutType(org.apache.hadoop.hbase.client.Result res) |
Modifier and Type | Method and Description |
---|---|
protected InputFormat<RowData,?> |
HBaseDynamicTableSource.getInputFormat() |
Modifier and Type | Method and Description |
---|---|
void |
HBaseRowDataAsyncLookupFunction.eval(CompletableFuture<Collection<RowData>> future,
Object rowKey)
The invocation entry point of the lookup function.
|
Modifier and Type | Method and Description |
---|---|
RowData |
JdbcRowConverter.toInternal(ResultSet resultSet)
|
RowData |
AbstractJdbcRowConverter.toInternal(ResultSet resultSet) |
Modifier and Type | Method and Description |
---|---|
void |
AbstractJdbcRowConverter.JdbcSerializationConverter.serialize(RowData rowData,
int index,
FieldNamedPreparedStatement statement) |
FieldNamedPreparedStatement |
JdbcRowConverter.toExternal(RowData rowData,
FieldNamedPreparedStatement statement)
Converts data from Flink's internal RowData to a JDBC object.
|
FieldNamedPreparedStatement |
AbstractJdbcRowConverter.toExternal(RowData rowData,
FieldNamedPreparedStatement statement) |
Modifier and Type | Method and Description |
---|---|
void |
TableBufferedStatementExecutor.addToBatch(RowData record) |
void |
TableBufferReducedStatementExecutor.addToBatch(RowData record) |
void |
TableSimpleStatementExecutor.addToBatch(RowData record) |
void |
TableInsertOrUpdateStatementExecutor.addToBatch(RowData record) |
Modifier and Type | Method and Description |
---|---|
RowData |
JdbcRowDataInputFormat.nextRecord(RowData reuse)
Stores the next ResultSet row in the reused RowData.
|
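The reuse parameter follows Flink's standard object-reuse InputFormat pattern. A sketch of the read loop, assuming the format has already been configured and opened (the JdbcRowDataInputFormat package is assumed):

```java
import java.io.IOException;

import org.apache.flink.connector.jdbc.table.JdbcRowDataInputFormat; // package assumed
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;

class JdbcReadSketch {
    static void readAll(JdbcRowDataInputFormat format, int numFields) throws IOException {
        RowData reuse = new GenericRowData(numFields); // arity must match the query schema
        while (!format.reachedEnd()) {
            RowData row = format.nextRecord(reuse);
            // Consume 'row' before the next iteration: the same instance is reused.
            System.out.println(row);
        }
    }
}
```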
Modifier and Type | Method and Description |
---|---|
JdbcOutputFormat<RowData,?,?> |
JdbcOutputFormatBuilder.build() |
org.apache.flink.shaded.guava30.com.google.common.cache.Cache<RowData,List<RowData>> |
JdbcRowDataLookupFunction.getCache() |
TypeInformation<RowData> |
JdbcRowDataInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
JdbcRowDataInputFormat.nextRecord(RowData reuse)
Stores the next ResultSet row in the reused RowData.
|
Modifier and Type | Method and Description |
---|---|
JdbcRowDataInputFormat.Builder |
JdbcRowDataInputFormat.Builder.setRowDataTypeInfo(TypeInformation<RowData> rowDataTypeInfo) |
JdbcOutputFormatBuilder |
JdbcOutputFormatBuilder.setRowDataTypeInfo(TypeInformation<RowData> rowDataTypeInfo) |
Modifier and Type | Method and Description |
---|---|
static PartitionKeyGenerator<RowData> |
KinesisPartitionKeyGeneratorFactory.getKinesisPartitioner(ReadableConfig tableOptions,
RowType physicalType,
List<String> partitionKeys,
ClassLoader classLoader)
Constructs the Kinesis partitioner for a
targetTable based on the currently set
tableOptions. |
Modifier and Type | Method and Description |
---|---|
String |
RowDataFieldsKinesisPartitionKeyGenerator.apply(RowData element) |
Modifier and Type | Method and Description |
---|---|
KinesisDynamicSink.KinesisDynamicTableSinkBuilder |
KinesisDynamicSink.KinesisDynamicTableSinkBuilder.setEncodingFormat(EncodingFormat<SerializationSchema<RowData>> encodingFormat) |
KinesisDynamicSink.KinesisDynamicTableSinkBuilder |
KinesisDynamicSink.KinesisDynamicTableSinkBuilder.setPartitioner(PartitionKeyGenerator<RowData> partitioner) |
Constructor and Description |
---|
KinesisDynamicSink(Integer maxBatchSize,
Integer maxInFlightRequests,
Integer maxBufferedRequests,
Long maxBufferSizeInBytes,
Long maxTimeInBufferMS,
Boolean failOnError,
DataType consumedDataType,
String stream,
Properties kinesisClientProperties,
EncodingFormat<SerializationSchema<RowData>> encodingFormat,
PartitionKeyGenerator<RowData> partitioner) |
Modifier and Type | Method and Description |
---|---|
ExternalSystemDataReader<RowData> |
TableSinkExternalContext.createSinkRowDataReader(TestingSinkSettings sinkOptions,
DataType dataType)
Creates a data reader for reading back the records that the sink wrote to the external
system.
|
Modifier and Type | Method and Description |
---|---|
ExternalSystemSplitDataWriter<RowData> |
TableSourceExternalContext.createSplitRowDataWriter(TestingSourceSettings sourceOptions,
DataType dataType)
Creates a new split in the external system and returns a data writer for writing
RowData corresponding to the new split. |
Modifier and Type | Method and Description |
---|---|
HiveSource<RowData> |
HiveSourceBuilder.buildWithDefaultBulkFormat()
Builds a HiveSource with the default built-in BulkFormat that returns records of type RowData.
|
protected DataStream<RowData> |
HiveTableSource.getDataStream(ProviderContext providerContext,
StreamExecutionEnvironment execEnv) |
PartitionReader<P,RowData> |
FileSystemLookupFunction.getPartitionReader() |
TypeInformation<RowData> |
FileSystemLookupFunction.getResultType() |
Modifier and Type | Method and Description |
---|---|
LinkedHashMap<String,String> |
HiveRowDataPartitionComputer.generatePartValues(RowData in) |
Constructor and Description |
---|
FileSystemLookupFunction(PartitionFetcher<P> partitionFetcher,
PartitionFetcher.Context<P> fetcherContext,
PartitionReader<P,RowData> partitionReader,
RowType rowType,
int[] lookupKeys,
java.time.Duration reloadInterval) |
Modifier and Type | Method and Description |
---|---|
RowData |
HiveVectorizedOrcSplitReader.nextRecord(RowData reuse) |
RowData |
SplitReader.nextRecord(RowData reuse)
Reads the next record from the input.
|
RowData |
HiveVectorizedParquetSplitReader.nextRecord(RowData reuse) |
RowData |
HiveTableInputFormat.nextRecord(RowData reuse) |
RowData |
HiveMapredSplitReader.nextRecord(RowData reuse) |
RowData |
HiveTableFileInputFormat.nextRecord(RowData reuse) |
RowData |
HiveInputFormatPartitionReader.read(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
CompactReader<RowData> |
HiveCompactReaderFactory.create(CompactContext context) |
BulkFormat.Reader<RowData> |
HiveInputFormat.createReader(Configuration config,
HiveSourceSplit split) |
TypeInformation<RowData> |
HiveInputFormat.getProducedType() |
BulkFormat.Reader<RowData> |
HiveInputFormat.restoreReader(Configuration config,
HiveSourceSplit split) |
Modifier and Type | Method and Description |
---|---|
RowData |
HiveVectorizedOrcSplitReader.nextRecord(RowData reuse) |
RowData |
SplitReader.nextRecord(RowData reuse)
Reads the next record from the input.
|
RowData |
HiveVectorizedParquetSplitReader.nextRecord(RowData reuse) |
RowData |
HiveTableInputFormat.nextRecord(RowData reuse) |
RowData |
HiveMapredSplitReader.nextRecord(RowData reuse) |
RowData |
HiveTableFileInputFormat.nextRecord(RowData reuse) |
RowData |
HiveInputFormatPartitionReader.read(RowData reuse) |
void |
HiveVectorizedOrcSplitReader.seekToRow(long rowCount,
RowData reuse) |
default void |
SplitReader.seekToRow(long rowCount,
RowData reuse)
Seeks to a particular row number.
|
void |
HiveVectorizedParquetSplitReader.seekToRow(long rowCount,
RowData reuse) |
Modifier and Type | Method and Description |
---|---|
HadoopPathBasedBulkWriter<RowData> |
HiveBulkWriterFactory.create(org.apache.hadoop.fs.Path targetPath,
org.apache.hadoop.fs.Path inProgressPath) |
java.util.function.Function<RowData,org.apache.hadoop.io.Writable> |
HiveWriterFactory.createRowDataConverter() |
Modifier and Type | Method and Description |
---|---|
RowData |
AvroRowDataDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
AvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
BulkDecodingFormat<RowData> |
AvroFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
AvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<BulkWriter.Factory<RowData>> |
AvroFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
AvroRowDataDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
AvroRowDataDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
AvroRowDataSerializationSchema.serialize(RowData row) |
Constructor and Description |
---|
AvroRowDataDeserializationSchema(DeserializationSchema<org.apache.avro.generic.GenericRecord> nestedSchema,
AvroToRowDataConverters.AvroToRowDataConverter runtimeConverter,
TypeInformation<RowData> typeInfo)
Creates an Avro deserialization schema for the given logical type.
|
AvroRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> typeInfo)
Creates an Avro deserialization schema for the given logical type.
|
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
RegistryAvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
RegistryAvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
Modifier and Type | Method and Description |
---|---|
RowData |
DebeziumAvroDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
DebeziumAvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
DebeziumAvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
DebeziumAvroDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
DebeziumAvroDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
DebeziumAvroSerializationSchema.serialize(RowData rowData) |
Modifier and Type | Method and Description |
---|---|
void |
DebeziumAvroDeserializationSchema.deserialize(byte[] message,
Collector<RowData> out) |
Constructor and Description |
---|
DebeziumAvroDeserializationSchema(RowType rowType,
TypeInformation<RowData> producedTypeInfo,
String schemaRegistryUrl,
Map<String,?> registryConfigs) |
Modifier and Type | Method and Description |
---|---|
RowData |
CsvRowDataDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
CsvFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
BulkDecodingFormat<RowData> |
CsvFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
CsvFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<BulkWriter.Factory<RowData>> |
CsvFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
CsvRowDataDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
CsvRowDataDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
CsvRowDataSerializationSchema.serialize(RowData row) |
Constructor and Description |
---|
Builder(RowType rowType,
TypeInformation<RowData> resultTypeInfo)
Creates a CSV deserialization schema for the given
TypeInformation with optional
parameters. |
Modifier and Type | Method and Description |
---|---|
RowData |
JsonRowDataDeserializationSchema.convertToRowData(org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode message) |
RowData |
JsonRowDataDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
JsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
JsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
JsonRowDataDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
JsonRowDataDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
JsonRowDataSerializationSchema.serialize(RowData row) |
Constructor and Description |
---|
JsonRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> resultTypeInfo,
boolean failOnMissingField,
boolean ignoreParseErrors,
TimestampFormat timestampFormat) |
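Constructing the schema from the constructor above, as a sketch. InternalTypeInfo is one way to obtain the TypeInformation&lt;RowData&gt;; the TimestampFormat import path varies between Flink versions:

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.common.TimestampFormat; // org.apache.flink.formats.json in older versions
import org.apache.flink.formats.json.JsonRowDataDeserializationSchema;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

class JsonSchemaSketch {
    static JsonRowDataDeserializationSchema build() {
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});
        TypeInformation<RowData> typeInfo = InternalTypeInfo.of(rowType);
        return new JsonRowDataDeserializationSchema(
                rowType,
                typeInfo,
                false, // failOnMissingField
                true,  // ignoreParseErrors
                TimestampFormat.ISO_8601);
    }
}
```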
Modifier and Type | Method and Description |
---|---|
RowData |
CanalJsonDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
CanalJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
CanalJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
DeserializationSchema<RowData> |
CanalJsonDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context,
DataType physicalDataType,
int[][] projections) |
TypeInformation<RowData> |
CanalJsonDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
CanalJsonDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
CanalJsonSerializationSchema.serialize(RowData row) |
Modifier and Type | Method and Description |
---|---|
static CanalJsonDeserializationSchema.Builder |
CanalJsonDeserializationSchema.builder(DataType physicalDataType,
List<org.apache.flink.formats.json.canal.CanalJsonDecodingFormat.ReadableMetadata> requestedMetadata,
TypeInformation<RowData> producedTypeInfo)
Creates a builder for building a
CanalJsonDeserializationSchema. |
void |
CanalJsonDeserializationSchema.deserialize(byte[] message,
Collector<RowData> out) |
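A hedged sketch of the builder usage documented above. The setter names are assumptions based on the sibling JSON formats, and an empty metadata list is the simplest valid input:

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.common.TimestampFormat; // import path varies by version
import org.apache.flink.formats.json.canal.CanalJsonDeserializationSchema;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.DataType;

class CanalSchemaSketch {
    // Sketch only: builder(...) is documented above; setIgnoreParseErrors,
    // setTimestampFormat and build() are assumed builder methods.
    static CanalJsonDeserializationSchema build(
            DataType physicalDataType, TypeInformation<RowData> producedTypeInfo) {
        return CanalJsonDeserializationSchema.builder(
                        physicalDataType, java.util.Collections.emptyList(), producedTypeInfo)
                .setIgnoreParseErrors(true)
                .setTimestampFormat(TimestampFormat.SQL)
                .build();
    }
}
```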
Modifier and Type | Method and Description |
---|---|
RowData |
DebeziumJsonDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
DebeziumJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
DebeziumJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
DeserializationSchema<RowData> |
DebeziumJsonDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context,
DataType physicalDataType,
int[][] projections) |
TypeInformation<RowData> |
DebeziumJsonDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
DebeziumJsonDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
DebeziumJsonSerializationSchema.serialize(RowData rowData) |
Modifier and Type | Method and Description |
---|---|
void |
DebeziumJsonDeserializationSchema.deserialize(byte[] message,
Collector<RowData> out) |
Constructor and Description |
---|
DebeziumJsonDeserializationSchema(DataType physicalDataType,
List<org.apache.flink.formats.json.debezium.DebeziumJsonDecodingFormat.ReadableMetadata> requestedMetadata,
TypeInformation<RowData> producedTypeInfo,
boolean schemaInclude,
boolean ignoreParseErrors,
TimestampFormat timestampFormat) |
Modifier and Type | Method and Description |
---|---|
RowData |
MaxwellJsonDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
MaxwellJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
MaxwellJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
DeserializationSchema<RowData> |
MaxwellJsonDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context,
DataType physicalDataType,
int[][] projections) |
TypeInformation<RowData> |
MaxwellJsonDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
MaxwellJsonDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
MaxwellJsonSerializationSchema.serialize(RowData element) |
Modifier and Type | Method and Description |
---|---|
void |
MaxwellJsonDeserializationSchema.deserialize(byte[] message,
Collector<RowData> out) |
Constructor and Description |
---|
MaxwellJsonDeserializationSchema(DataType physicalDataType,
List<org.apache.flink.formats.json.maxwell.MaxwellJsonDecodingFormat.ReadableMetadata> requestedMetadata,
TypeInformation<RowData> producedTypeInfo,
boolean ignoreParseErrors,
TimestampFormat timestampFormat) |
Modifier and Type | Method and Description |
---|---|
RowData |
OggJsonDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
OggJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
OggJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
DeserializationSchema<RowData> |
OggJsonDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context,
DataType physicalDataType) |
TypeInformation<RowData> |
OggJsonDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
OggJsonDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
OggJsonSerializationSchema.serialize(RowData rowData) |
Modifier and Type | Method and Description |
---|---|
void |
OggJsonDeserializationSchema.deserialize(byte[] message,
Collector<RowData> out) |
Constructor and Description |
---|
OggJsonDeserializationSchema(DataType physicalDataType,
List<org.apache.flink.formats.json.ogg.OggJsonDecodingFormat.ReadableMetadata> requestedMetadata,
TypeInformation<RowData> producedTypeInfo,
boolean ignoreParseErrors,
TimestampFormat timestampFormat) |
Modifier and Type | Method and Description |
---|---|
BulkDecodingFormat<RowData> |
ParquetFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<BulkWriter.Factory<RowData>> |
ParquetFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
protected ParquetVectorizedInputFormat.ParquetReaderBatch<RowData> |
ParquetColumnarRowInputFormat.createReaderBatch(WritableColumnVector[] writableVectors,
VectorizedColumnBatch columnarBatch,
Pool.Recycler<ParquetVectorizedInputFormat.ParquetReaderBatch<RowData>> recycler) |
TypeInformation<RowData> |
ParquetColumnarRowInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
ParquetColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig,
RowType producedRowType,
TypeInformation<RowData> producedTypeInfo,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int batchSize,
boolean isUtcTimestamp,
boolean isCaseSensitive)
Creates a partitioned
ParquetColumnarRowInputFormat; the partition columns can be
generated from the Path. |
protected ParquetVectorizedInputFormat.ParquetReaderBatch<RowData> |
ParquetColumnarRowInputFormat.createReaderBatch(WritableColumnVector[] writableVectors,
VectorizedColumnBatch columnarBatch,
Pool.Recycler<ParquetVectorizedInputFormat.ParquetReaderBatch<RowData>> recycler) |
Modifier and Type | Method and Description |
---|---|
org.apache.parquet.hadoop.ParquetWriter<RowData> |
ParquetRowDataBuilder.FlinkParquetBuilder.createWriter(org.apache.parquet.io.OutputFile out) |
static ParquetWriterFactory<RowData> |
ParquetRowDataBuilder.createWriterFactory(RowType rowType,
Configuration conf,
boolean utcTimestamp)
Creates a Parquet
BulkWriter.Factory. |
protected org.apache.parquet.hadoop.api.WriteSupport<RowData> |
ParquetRowDataBuilder.getWriteSupport(Configuration conf) |
Modifier and Type | Method and Description |
---|---|
void |
ParquetRowDataWriter.write(RowData record)
Writes a record to Parquet.
|
Modifier and Type | Method and Description |
---|---|
RowData |
RawFormatDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
RawFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
RawFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
RawFormatDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
RawFormatDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
RawFormatSerializationSchema.serialize(RowData row) |
Constructor and Description |
---|
RawFormatDeserializationSchema(LogicalType deserializedType,
TypeInformation<RowData> producedTypeInfo,
String charsetName,
boolean isBigEndian) |
Modifier and Type | Method and Description |
---|---|
RowData |
OrcColumnarRowSplitReader.nextRecord(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
BulkDecodingFormat<RowData> |
OrcFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<BulkWriter.Factory<RowData>> |
OrcFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT> |
OrcColumnarRowInputFormat.createReaderBatch(SplitT split,
OrcVectorizedBatchWrapper<BatchT> orcBatch,
Pool.Recycler<AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT>> recycler,
int batchSize) |
TypeInformation<RowData> |
OrcColumnarRowInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
OrcColumnarRowSplitReader.nextRecord(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
OrcColumnarRowInputFormat.createPartitionedFormat(OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> shim,
Configuration hadoopConfig,
RowType tableType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
java.util.function.Function<RowType,TypeInformation<RowData>> rowTypeInfoFactory)
Creates a partitioned
OrcColumnarRowInputFormat; the partition columns can be
generated from the split. |
AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT> |
OrcColumnarRowInputFormat.createReaderBatch(SplitT split,
OrcVectorizedBatchWrapper<BatchT> orcBatch,
Pool.Recycler<AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT>> recycler,
int batchSize) |
Constructor and Description |
---|
OrcColumnarRowInputFormat(OrcShim<BatchT> shim,
Configuration hadoopConfig,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
ColumnBatchFactory<BatchT,SplitT> batchFactory,
TypeInformation<RowData> producedTypeInfo) |
Modifier and Type | Method and Description |
---|---|
BulkWriter<RowData> |
OrcNoHiveBulkWriterFactory.create(FSDataOutputStream out) |
Modifier and Type | Method and Description |
---|---|
static <SplitT extends FileSourceSplit> |
OrcNoHiveColumnarRowInputFormat.createPartitionedFormat(Configuration hadoopConfig,
RowType tableType,
List<String> partitionKeys,
PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
java.util.function.Function<RowType,TypeInformation<RowData>> rowTypeInfoFactory)
Creates a partitioned
OrcColumnarRowInputFormat; the partition columns can be
generated from the split. |
Modifier and Type | Method and Description |
---|---|
void |
RowDataVectorizer.vectorize(RowData row,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
Modifier and Type | Field and Description |
---|---|
protected DecodingFormat<DeserializationSchema<RowData>> |
KafkaDynamicSource.keyDecodingFormat
Optional format for decoding keys from Kafka.
|
protected EncodingFormat<SerializationSchema<RowData>> |
KafkaDynamicSink.keyEncodingFormat
Optional format for encoding keys to Kafka.
|
protected FlinkKafkaPartitioner<RowData> |
KafkaDynamicSink.partitioner
Partitioner to select the Kafka partition for each record.
|
protected DecodingFormat<DeserializationSchema<RowData>> |
KafkaDynamicSource.valueDecodingFormat
Format for decoding values from Kafka.
|
protected EncodingFormat<SerializationSchema<RowData>> |
KafkaDynamicSink.valueEncodingFormat
Format for encoding values to Kafka.
|
protected WatermarkStrategy<RowData> |
KafkaDynamicSource.watermarkStrategy
Watermark strategy used to generate per-partition watermarks.
|
Modifier and Type | Method and Description |
---|---|
protected KafkaSource<RowData> |
KafkaDynamicSource.createKafkaSource(DeserializationSchema<RowData> keyDeserialization,
DeserializationSchema<RowData> valueDeserialization,
TypeInformation<RowData> producedTypeInfo) |
DeserializationSchema<RowData> |
UpsertKafkaDynamicTableFactory.DecodingFormatWrapper.createRuntimeDecoder(DynamicTableSource.Context context,
DataType producedDataType) |
SerializationSchema<RowData> |
UpsertKafkaDynamicTableFactory.EncodingFormatWrapper.createRuntimeEncoder(DynamicTableSink.Context context,
DataType consumedDataType) |
Modifier and Type | Method and Description |
---|---|
void |
KafkaDynamicSource.applyWatermark(WatermarkStrategy<RowData> watermarkStrategy) |
protected KafkaSource<RowData> |
KafkaDynamicSource.createKafkaSource(DeserializationSchema<RowData> keyDeserialization,
DeserializationSchema<RowData> valueDeserialization,
TypeInformation<RowData> producedTypeInfo) |
protected KafkaDynamicSink |
KafkaDynamicTableFactory.createKafkaTableSink(DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
DeliveryGuarantee deliveryGuarantee,
Integer parallelism,
String transactionalIdPrefix) |
protected KafkaDynamicSource |
KafkaDynamicTableFactory.createKafkaTableSource(DataType physicalDataType,
DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat,
DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
List<String> topics,
Pattern topicPattern,
Properties properties,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis,
String tableIdentifier) |
Constructor and Description |
---|
DecodingFormatWrapper(DecodingFormat<DeserializationSchema<RowData>> innerDecodingFormat) |
EncodingFormatWrapper(EncodingFormat<SerializationSchema<RowData>> innerEncodingFormat) |
KafkaDynamicSink(DataType consumedDataType,
DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
DeliveryGuarantee deliveryGuarantee,
boolean upsertMode,
SinkBufferFlushMode flushMode,
Integer parallelism,
String transactionalIdPrefix) |
KafkaDynamicSource(DataType physicalDataType,
DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat,
DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
List<String> topics,
Pattern topicPattern,
Properties properties,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis,
boolean upsertMode,
String tableIdentifier) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataKinesisDeserializationSchema.deserialize(byte[] recordValue,
String partitionKey,
String seqNum,
long approxArrivalTimestamp,
String stream,
String shardId) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<RowData> |
RowDataKinesisDeserializationSchema.getProducedType() |
Constructor and Description |
---|
KinesisDynamicSource(DataType physicalDataType,
String stream,
Properties consumerProperties,
DecodingFormat<DeserializationSchema<RowData>> decodingFormat) |
KinesisDynamicSource(DataType physicalDataType,
String stream,
Properties consumerProperties,
DecodingFormat<DeserializationSchema<RowData>> decodingFormat,
DataType producedDataType,
List<RowDataKinesisDeserializationSchema.Metadata> requestedMetadataFields) |
RowDataKinesisDeserializationSchema(DeserializationSchema<RowData> physicalDeserializer,
TypeInformation<RowData> producedTypeInfo,
List<RowDataKinesisDeserializationSchema.Metadata> requestedMetadataFields) |
Modifier and Type | Method and Description |
---|---|
CloseableIterator<RowData> |
TableResultImpl.collectInternal() |
CloseableIterator<RowData> |
TableResultInternal.collectInternal()
Returns an iterator over the result using the internal row data type.
|
CloseableIterator<RowData> |
ResultProvider.toInternalIterator()
Returns the select result as row iterator using internal data types.
|
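Since the result is a CloseableIterator, the natural consumption pattern is try-with-resources. A sketch, assuming a TableResult obtained through the internal API:

```java
import org.apache.flink.table.api.TableResult;
import org.apache.flink.table.api.internal.TableResultInternal;
import org.apache.flink.table.data.RowData;
import org.apache.flink.util.CloseableIterator;

class CollectInternalSketch {
    static void printAll(TableResult result) throws Exception {
        try (CloseableIterator<RowData> it = ((TableResultInternal) result).collectInternal()) {
            while (it.hasNext()) {
                RowData row = it.next(); // internal representation, e.g. StringData for STRING
                System.out.println(row);
            }
        }
    }
}
```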
Modifier and Type | Method and Description |
---|---|
BulkWriter.Factory<RowData> |
HiveShimV200.createOrcBulkWriterFactory(Configuration conf,
String schema,
LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> |
HiveShimV100.createOrcBulkWriterFactory(Configuration conf,
String schema,
LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> |
HiveShim.createOrcBulkWriterFactory(Configuration conf,
String schema,
LogicalType[] fieldTypes)
Creates an ORC
BulkWriter.Factory for different Hive versions. |
Modifier and Type | Method and Description |
---|---|
TypedResult<List<RowData>> |
Executor.retrieveResultChanges(String sessionId,
String resultId)
Asks for the next changelog results (non-blocking).
|
List<RowData> |
Executor.retrieveResultPage(String resultId,
int page)
Returns the rows that are part of the current page or throws an exception if the snapshot
has expired.
|
Modifier and Type | Method and Description |
---|---|
TypedResult<List<RowData>> |
LocalExecutor.retrieveResultChanges(String sessionId,
String resultId) |
List<RowData> |
LocalExecutor.retrieveResultPage(String resultId,
int page) |
Modifier and Type | Field and Description |
---|---|
protected List<RowData> |
MaterializedCollectResultBase.materializedTable
Materialized table that is continuously updated by inserts and deletes.
|
Modifier and Type | Method and Description |
---|---|
protected List<RowData> |
MaterializedCollectResultBase.getMaterializedTable() |
TypedResult<List<RowData>> |
ChangelogCollectResult.retrieveChanges() |
TypedResult<List<RowData>> |
ChangelogResult.retrieveChanges()
Retrieves the available result records.
|
List<RowData> |
MaterializedResult.retrievePage(int page)
Retrieves a page of a snapshotted result.
|
List<RowData> |
MaterializedCollectResultBase.retrievePage(int page) |
Modifier and Type | Method and Description |
---|---|
protected void |
ChangelogCollectResult.processRecord(RowData row) |
protected void |
MaterializedCollectBatchResult.processRecord(RowData row) |
protected abstract void |
CollectResultBase.processRecord(RowData row) |
protected void |
MaterializedCollectStreamResult.processRecord(RowData row) |
Modifier and Type | Method and Description |
---|---|
OutputFormat<RowData> |
OutputFormatProvider.createOutputFormat()
Creates an
OutputFormat instance. |
Sink<RowData> |
SinkV2Provider.createSink() |
Sink<RowData,?,?,?> |
SinkProvider.createSink()
Deprecated.
Creates a
Sink instance. |
SinkFunction<RowData> |
SinkFunctionProvider.createSinkFunction()
Creates a
SinkFunction instance. |
Modifier and Type | Method and Description |
---|---|
default DataStreamSink<?> |
DataStreamSinkProvider.consumeDataStream(DataStream<RowData> dataStream)
Deprecated.
Use
DataStreamSinkProvider.consumeDataStream(ProviderContext, DataStream)
and correctly set a unique identifier for each data stream transformation. |
default DataStreamSink<?> |
DataStreamSinkProvider.consumeDataStream(ProviderContext providerContext,
DataStream<RowData> dataStream)
Consumes the given Java
DataStream and returns the sink transformation DataStreamSink. |
static OutputFormatProvider |
OutputFormatProvider.of(OutputFormat<RowData> outputFormat)
Helper method for creating a static provider.
|
static OutputFormatProvider |
OutputFormatProvider.of(OutputFormat<RowData> outputFormat,
Integer sinkParallelism)
Helper method for creating a static provider with a provided sink parallelism.
|
static SinkProvider |
SinkProvider.of(Sink<RowData,?,?,?> sink)
Deprecated.
Helper method for creating a static provider.
|
static SinkProvider |
SinkProvider.of(Sink<RowData,?,?,?> sink,
Integer sinkParallelism)
Deprecated.
Helper method for creating a Sink provider with a provided sink parallelism.
|
static SinkV2Provider |
SinkV2Provider.of(Sink<RowData> sink)
Helper method for creating a static provider.
|
static SinkV2Provider |
SinkV2Provider.of(Sink<RowData> sink,
Integer sinkParallelism)
Helper method for creating a Sink provider with a provided sink parallelism.
|
static SinkFunctionProvider |
SinkFunctionProvider.of(SinkFunction<RowData> sinkFunction)
Helper method for creating a static provider.
|
static SinkFunctionProvider |
SinkFunctionProvider.of(SinkFunction<RowData> sinkFunction,
Integer sinkParallelism)
Helper method for creating a SinkFunction provider with a provided sink parallelism.
|
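A minimal DynamicTableSink wiring one of the providers above, here SinkFunctionProvider.of; the sink itself is a toy that prints records, and the whole class is a sketch rather than a production connector:

```java
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.sink.DynamicTableSink;
import org.apache.flink.table.connector.sink.SinkFunctionProvider;
import org.apache.flink.table.data.RowData;

public class PrintingTableSink implements DynamicTableSink {

    @Override
    public ChangelogMode getChangelogMode(ChangelogMode requestedMode) {
        return ChangelogMode.insertOnly();
    }

    @Override
    public SinkRuntimeProvider getSinkRuntimeProvider(Context context) {
        return SinkFunctionProvider.of(
                new SinkFunction<RowData>() {
                    @Override
                    public void invoke(RowData value, SinkFunction.Context ctx) {
                        System.out.println(value); // toy sink: print every record
                    }
                });
    }

    @Override
    public DynamicTableSink copy() {
        return new PrintingTableSink();
    }

    @Override
    public String asSummaryString() {
        return "printing sink (sketch)";
    }
}
```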
Modifier and Type | Method and Description |
---|---|
InputFormat<RowData,?> |
InputFormatProvider.createInputFormat()
Creates an
InputFormat instance. |
Source<RowData,?,?> |
SourceProvider.createSource()
Creates a
Source instance. |
SourceFunction<RowData> |
SourceFunctionProvider.createSourceFunction()
Creates a
SourceFunction instance. |
default DataStream<RowData> |
DataStreamScanProvider.produceDataStream(ProviderContext providerContext,
StreamExecutionEnvironment execEnv)
Creates a scan Java
DataStream from a StreamExecutionEnvironment. |
default DataStream<RowData> |
DataStreamScanProvider.produceDataStream(StreamExecutionEnvironment execEnv)
Deprecated.
|
Modifier and Type | Method and Description |
---|---|
static InputFormatProvider |
InputFormatProvider.of(InputFormat<RowData,?> inputFormat)
Helper method for creating a static provider.
|
static SourceProvider |
SourceProvider.of(Source<RowData,?,?> source)
Helper method for creating a static provider.
|
static SourceFunctionProvider |
SourceFunctionProvider.of(SourceFunction<RowData> sourceFunction,
boolean isBounded)
Helper method for creating a static provider.
|
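On the source side the pattern is symmetric: a ScanTableSource returns one of the providers above from getScanRuntimeProvider. A sketch using SourceFunctionProvider, with the actual SourceFunction assumed to be supplied by the table factory:

```java
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.connector.source.SourceFunctionProvider;
import org.apache.flink.table.data.RowData;

public class SketchScanSource implements ScanTableSource {

    private final SourceFunction<RowData> sourceFunction; // assumed, supplied by the factory

    public SketchScanSource(SourceFunction<RowData> sourceFunction) {
        this.sourceFunction = sourceFunction;
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {
        // Unbounded scan; see SourceFunctionProvider.of(SourceFunction, boolean) above.
        return SourceFunctionProvider.of(sourceFunction, false);
    }

    @Override
    public DynamicTableSource copy() {
        return new SketchScanSource(sourceFunction);
    }

    @Override
    public String asSummaryString() {
        return "sketch scan source";
    }
}
```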
Modifier and Type | Method and Description |
---|---|
void |
SupportsWatermarkPushDown.applyWatermark(WatermarkStrategy<RowData> watermarkStrategy)
Provides a
WatermarkStrategy which defines how to generate Watermarks in the
stream source. |
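A source that supports watermark push-down typically just stores the pushed-down strategy and applies it when building the runtime source. A sketch (the planner additionally requires the implementing class to be a ScanTableSource):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown;
import org.apache.flink.table.data.RowData;

public abstract class WatermarkAwareSource implements SupportsWatermarkPushDown {

    // Stored here and later handed to the runtime source (e.g. a source builder).
    protected WatermarkStrategy<RowData> watermarkStrategy = WatermarkStrategy.noWatermarks();

    @Override
    public void applyWatermark(WatermarkStrategy<RowData> watermarkStrategy) {
        this.watermarkStrategy = watermarkStrategy;
    }
}
```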
Modifier and Type | Class and Description |
---|---|
class |
BoxedWrapperRowData
An implementation of
RowData which is also backed by an array of Java Objects, similar to GenericRowData. |
class |
GenericRowData
An internal data structure representing data of
RowType and other (possibly nested)
structured types such as StructuredType. |
class |
UpdatableRowData
|
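GenericRowData is the easiest of these classes to construct by hand, which makes it useful in tests. Note that fields must already be in their internal representation (StringData rather than String, for example):

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.types.RowKind;

class GenericRowDataSketch {
    public static void main(String[] args) {
        GenericRowData row = GenericRowData.of(1L, StringData.fromString("alice"));
        row.setRowKind(RowKind.UPDATE_AFTER); // the default kind is INSERT

        long id = row.getLong(0);
        String name = row.getString(1).toString();
        System.out.println(id + " -> " + name);
    }
}
```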
Modifier and Type | Method and Description |
---|---|
RowData |
UpdatableRowData.getRow() |
RowData |
BoxedWrapperRowData.getRow(int pos,
int numFields) |
RowData |
UpdatableRowData.getRow(int pos,
int numFields) |
RowData |
GenericRowData.getRow(int pos,
int numFields) |
RowData |
ArrayData.getRow(int pos,
int numFields)
Returns the row value at the given position.
|
RowData |
RowData.getRow(int pos,
int numFields)
Returns the row value at the given position.
|
RowData |
GenericArrayData.getRow(int pos,
int numFields) |
Modifier and Type | Method and Description |
---|---|
Object |
RowData.FieldGetter.getFieldOrNull(RowData row) |
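RowData.FieldGetter, created via the static RowData.createFieldGetter, gives null-safe access that is dispatched once on the logical type instead of hard-coding a getXxx call per field:

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.types.logical.VarCharType;

class FieldGetterSketch {
    public static void main(String[] args) {
        RowData row = GenericRowData.of(StringData.fromString("alice"));
        RowData.FieldGetter getter =
                RowData.createFieldGetter(new VarCharType(VarCharType.MAX_LENGTH), 0);
        Object value = getter.getFieldOrNull(row); // StringData, or null for a null field
        System.out.println(value);
    }
}
```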
Constructor and Description |
---|
UpdatableRowData(RowData row,
int arity) |
Modifier and Type | Class and Description |
---|---|
class |
BinaryRowData
An implementation of
RowData which is backed by MemorySegment instead of Object. |
class |
NestedRowData
Its memory storage structure is exactly the same as
BinaryRowData. |
Modifier and Type | Method and Description |
---|---|
RowData |
BinaryArrayData.getRow(int pos,
int numFields) |
RowData |
BinaryRowData.getRow(int pos,
int numFields) |
RowData |
NestedRowData.getRow(int pos,
int numFields) |
static RowData |
BinarySegmentUtils.readRowData(MemorySegment[] segments,
int numFields,
int baseOffset,
long offsetAndSize)
Gets an instance of
RowData from the underlying MemorySegment. |
Modifier and Type | Method and Description |
---|---|
NestedRowData |
NestedRowData.copy(RowData reuse) |
Modifier and Type | Class and Description |
---|---|
class |
ColumnarRowData
Columnar row to support access to vector column data.
|
Modifier and Type | Method and Description |
---|---|
RowData |
ColumnarArrayData.getRow(int pos,
int numFields) |
RowData |
ColumnarRowData.getRow(int pos,
int numFields) |
Modifier and Type | Method and Description |
---|---|
RowData |
VectorizedColumnBatch.getRow(int rowId,
int colId) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowRowConverter.toInternal(Row external) |
RowData |
StructuredObjectConverter.toInternal(T external) |
Modifier and Type | Method and Description |
---|---|
T |
StructuredObjectConverter.toExternal(RowData internal) |
Row |
RowRowConverter.toExternal(RowData internal) |
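RowRowConverter bridges external Row objects and internal RowData. A sketch of round-tripping a row through the converter:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.conversion.RowRowConverter;
import org.apache.flink.types.Row;

class RowRowConverterSketch {
    public static void main(String[] args) {
        RowRowConverter converter = RowRowConverter.create(
                DataTypes.ROW(
                        DataTypes.FIELD("id", DataTypes.BIGINT()),
                        DataTypes.FIELD("name", DataTypes.STRING())));
        converter.open(Thread.currentThread().getContextClassLoader());

        RowData internal = converter.toInternal(Row.of(1L, "alice")); // external -> internal
        Row external = converter.toExternal(internal);                // internal -> external
        System.out.println(external);
    }
}
```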
Modifier and Type | Method and Description |
---|---|
static boolean |
RowDataUtil.isAccumulateMsg(RowData row)
Returns true if the message is either
RowKind.INSERT or RowKind.UPDATE_AFTER,
which corresponds to an accumulate operation in aggregation. |
static boolean |
RowDataUtil.isRetractMsg(RowData row)
Returns true if the message is either
RowKind.DELETE or RowKind.UPDATE_BEFORE, which corresponds to a retract operation in aggregation. |
External |
DataFormatConverters.DataFormatConverter.toExternal(RowData row,
int column)
Given an internal row, converts the value at column `column` to its external (Java)
equivalent.
|
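These helpers only inspect the record's RowKind, as the descriptions above state. A small sketch (the RowDataUtil package path is assumed):

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.util.RowDataUtil; // package assumed
import org.apache.flink.types.RowKind;

class RowKindSketch {
    public static void main(String[] args) {
        RowData insert = GenericRowData.of(1L); // RowKind.INSERT by default
        RowData retract = GenericRowData.of(1L);
        retract.setRowKind(RowKind.UPDATE_BEFORE);

        System.out.println(RowDataUtil.isAccumulateMsg(insert)); // true: INSERT / UPDATE_AFTER
        System.out.println(RowDataUtil.isRetractMsg(retract));   // true: DELETE / UPDATE_BEFORE
    }
}
```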
Modifier and Type | Class and Description |
---|---|
class |
JoinedRowData
|
class |
ProjectedRowData
|
Modifier and Type | Method and Description |
---|---|
RowData |
ProjectedRowData.getRow(int pos,
int numFields) |
RowData |
JoinedRowData.getRow(int pos,
int numFields) |
Modifier and Type | Method and Description |
---|---|
JoinedRowData |
JoinedRowData.replace(RowData row1,
RowData row2)
Replaces the
RowData backing this JoinedRowData. |
ProjectedRowData |
ProjectedRowData.replaceRow(RowData row)
Replaces the underlying
RowData backing this ProjectedRowData. |
Constructor and Description |
---|
JoinedRowData(RowData row1,
RowData row2)
Creates a new
JoinedRowData of kind RowKind.INSERT backed by
row1 and row2. |
JoinedRowData(RowKind rowKind,
RowData row1,
RowData row2)
Creates a new
JoinedRowData of the given rowKind backed by row1 and
row2. |
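JoinedRowData presents its two backing rows as one logical row, and replace allows reusing a single wrapper across records. A sketch (the class moved to table.data.utils in later Flink versions):

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.data.utils.JoinedRowData; // org.apache.flink.table.data in older versions

class JoinedRowDataSketch {
    public static void main(String[] args) {
        RowData left = GenericRowData.of(1L);
        RowData right = GenericRowData.of(StringData.fromString("alice"));

        JoinedRowData joined = new JoinedRowData(left, right); // logical row [id, name]
        System.out.println(joined.getLong(0) + " -> " + joined.getString(1));

        // Reuse the wrapper for the next pair of rows instead of allocating a new one.
        joined.replace(left, GenericRowData.of(StringData.fromString("bob")));
    }
}
```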
Modifier and Type | Method and Description |
---|---|
void |
BinaryWriter.writeRow(int pos,
RowData value,
RowDataSerializer serializer) |
Modifier and Type | Method and Description |
---|---|
RowData |
ChangelogCsvDeserializer.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
ChangelogCsvFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
DeserializationSchema<RowData> |
ChangelogCsvFormat.createRuntimeDecoder(DynamicTableSource.Context context,
DataType producedDataType) |
TypeInformation<RowData> |
SocketSourceFunction.getProducedType() |
TypeInformation<RowData> |
ChangelogCsvDeserializer.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
ChangelogCsvDeserializer.isEndOfStream(RowData nextElement) |
Modifier and Type | Method and Description |
---|---|
void |
SocketSourceFunction.run(SourceFunction.SourceContext<RowData> ctx) |
Constructor and Description |
---|
ChangelogCsvDeserializer(List<LogicalType> parsingTypes,
DynamicTableSource.DataStructureConverter converter,
TypeInformation<RowData> producedTypeInfo,
String columnDelimiter) |
SocketDynamicTableSource(String hostname,
int port,
byte byteDelimiter,
DecodingFormat<DeserializationSchema<RowData>> decodingFormat,
DataType producedDataType) |
SocketSourceFunction(String hostname,
int port,
byte byteDelimiter,
DeserializationSchema<RowData> deserializer) |
Modifier and Type | Method and Description |
---|---|
RowData |
InternalRowMergerFunction.eval(RowData r1,
RowData r2) |
Modifier and Type | Method and Description |
---|---|
RowData |
InternalRowMergerFunction.eval(RowData r1,
RowData r2) |
Modifier and Type | Method and Description |
---|---|
Transformation<RowData> |
TransformationScanProvider.createTransformation(ProviderContext providerContext)
Creates a
Transformation instance. |
Transformation<RowData> |
TransformationSinkProvider.Context.getInputTransformation()
Input transformation to transform.
|
Modifier and Type | Method and Description |
---|---|
String[] |
RowDataToStringConverterImpl.convert(RowData rowData) |
Modifier and Type | Method and Description |
---|---|
protected Transformation<RowData> |
BatchExecLegacyTableSourceScan.createConversionTransformationIfNeeded(StreamExecutionEnvironment streamExecEnv,
ExecNodeConfig config,
Transformation<?> sourceTransform,
org.apache.calcite.rex.RexNode rowtimeExpression) |
Transformation<RowData> |
BatchExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName) |
protected Transformation<RowData> |
BatchExecPythonGroupWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecBoundedStreamScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecNestedLoopJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecMultipleInput.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecRank.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecSortAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecHashAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecLimit.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecSort.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecHashWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecExchange.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecSortMergeJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecLegacyTableSourceScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecOverAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecTableSourceScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecSortLimit.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecPythonOverAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecSortWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecValues.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecWindowTableFunction.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecPythonGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
BatchExecHashJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
Modifier and Type | Method and Description |
---|---|
Transformation<RowData> |
BatchExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName) |
Modifier and Type | Method and Description |
---|---|
protected abstract Transformation<RowData> |
CommonExecLegacyTableSourceScan.createConversionTransformationIfNeeded(StreamExecutionEnvironment streamExecEnv,
ExecNodeConfig config,
Transformation<?> sourceTransform,
org.apache.calcite.rex.RexNode rowtimeExpression) |
protected abstract Transformation<RowData> |
CommonExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName)
Creates a
Transformation based on the given InputFormat. |
protected Transformation<RowData> |
CommonExecTableSourceScan.createSourceFunctionTransformation(StreamExecutionEnvironment env,
SourceFunction<RowData> function,
boolean isBounded,
String operatorName,
TypeInformation<RowData> outputTypeInfo)
Adapted from
StreamExecutionEnvironment.addSource(SourceFunction, String,
TypeInformation) but with a custom Boundedness. |
protected Transformation<RowData> |
CommonExecExpand.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecWindowTableFunction.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecValues.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecCalc.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecCorrelate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecPythonCorrelate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecTableSourceScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecPythonCalc.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecUnion.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
Transformation<RowData> |
CommonExecLookupJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
CommonExecLegacyTableSourceScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
Modifier and Type | Method and Description |
---|---|
protected Transformation<RowData> |
StreamExecLegacyTableSourceScan.createConversionTransformationIfNeeded(StreamExecutionEnvironment streamExecEnv,
ExecNodeConfig config,
Transformation<?> sourceTransform,
org.apache.calcite.rex.RexNode rowtimeExpression) |
Transformation<RowData> |
StreamExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName) |
static Tuple2<Pattern<RowData,RowData>,List<String>> |
StreamExecMatch.translatePattern(MatchSpec matchSpec,
TableConfig tableConfig,
org.apache.calcite.tools.RelBuilder relBuilder,
RowType inputRowType) |
protected Transformation<RowData> |
StreamExecWindowRank.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecMiniBatchAssigner.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGlobalGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecPythonOverAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecExchange.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecWindowDeduplicate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecTemporalJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecDropUpdateBefore.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecPythonGroupTableAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecOverAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecMatch.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGroupWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGlobalWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecLocalGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecWatermarkAssigner.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecIncrementalGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecWindowJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecSort.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecPythonGroupWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecMultipleInput.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecDataStreamScan.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecRank.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecTemporalSort.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecIntervalJoin.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecSortLimit.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecGroupTableAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecPythonGroupAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecLocalWindowAggregate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecLimit.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecChangelogNormalize.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
protected Transformation<RowData> |
StreamExecDeduplicate.translateToPlanInternal(org.apache.flink.table.planner.delegation.PlannerBase planner,
ExecNodeConfig config) |
Modifier and Type | Method and Description |
---|---|
Transformation<RowData> |
StreamExecTableSourceScan.createInputFormatTransformation(StreamExecutionEnvironment env,
InputFormat<RowData,?> inputFormat,
InternalTypeInfo<RowData> outputTypeInfo,
String operatorName) |
Modifier and Type | Method and Description |
---|---|
static RowDataKeySelector |
KeySelectorUtil.getRowDataSelector(int[] keyFields,
InternalTypeInfo<RowData> rowType)
Creates a RowDataKeySelector to extract keys from a DataStream whose type is
InternalTypeInfo of RowData . |
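
A minimal sketch of using this helper follows; the class locations (`org.apache.flink.table.planner.plan.utils.KeySelectorUtil`, `org.apache.flink.table.runtime.keyselector.RowDataKeySelector`) are assumptions based on the planner and runtime modules, and the key is field 0 of a two-field row:

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.planner.plan.utils.KeySelectorUtil;
import org.apache.flink.table.runtime.keyselector.RowDataKeySelector;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class KeySelectorSketch {
    public static void main(String[] args) throws Exception {
        // Row type: (id BIGINT, name VARCHAR)
        RowType rowType =
                RowType.of(new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH));
        InternalTypeInfo<RowData> typeInfo = InternalTypeInfo.of(rowType);

        // Select field 0 (id) as the key.
        RowDataKeySelector selector =
                KeySelectorUtil.getRowDataSelector(new int[] {0}, typeInfo);

        RowData row = GenericRowData.of(42L, StringData.fromString("alice"));
        RowData key = selector.getKey(row); // projected row holding only the id field
    }
}
```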
Modifier and Type | Method and Description |
---|---|
RowData |
ArrowReader.read(int rowId)
Reads the specified row from the underlying Arrow format data.
|
Modifier and Type | Method and Description |
---|---|
static ArrowWriter<RowData> |
ArrowUtils.createRowDataArrowWriter(org.apache.arrow.vector.VectorSchemaRoot root,
RowType rowType)
Creates an
ArrowWriter for the specified VectorSchemaRoot . |
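
A sketch of the write path. It assumes `ArrowUtils.toArrowSchema(RowType)` for deriving the Arrow schema and the standard Arrow `RootAllocator`; neither appears in the table above, so treat both as assumptions:

```java
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.arrow.ArrowUtils;
import org.apache.flink.table.runtime.arrow.ArrowWriter;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.RowType;

public class ArrowWriterSketch {
    public static void main(String[] args) {
        RowType rowType = RowType.of(new BigIntType());
        try (BufferAllocator allocator = new RootAllocator()) {
            VectorSchemaRoot root =
                    VectorSchemaRoot.create(ArrowUtils.toArrowSchema(rowType), allocator);
            ArrowWriter<RowData> writer = ArrowUtils.createRowDataArrowWriter(root, rowType);

            writer.write(GenericRowData.of(1L)); // buffer one row into the Arrow vectors
            writer.finish();                     // finalize: `root` now holds the batch
            root.close();
        }
    }
}
```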
Modifier and Type | Method and Description |
---|---|
RowData |
ArrowSerializer.read(int i) |
Modifier and Type | Method and Description |
---|---|
ArrowWriter<RowData> |
ArrowSerializer.createArrowWriter()
Creates an
ArrowWriter . |
Modifier and Type | Method and Description |
---|---|
void |
ArrowSerializer.write(RowData element) |
Modifier and Type | Method and Description |
---|---|
DataStream<RowData> |
ArrowTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
TypeInformation<RowData> |
ArrowSourceFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
void |
ArrowSourceFunction.run(SourceFunction.SourceContext<RowData> ctx) |
Modifier and Type | Method and Description |
---|---|
static BigIntWriter<RowData> |
BigIntWriter.forRow(org.apache.arrow.vector.BigIntVector bigIntVector) |
static BooleanWriter<RowData> |
BooleanWriter.forRow(org.apache.arrow.vector.BitVector bitVector) |
static DateWriter<RowData> |
DateWriter.forRow(org.apache.arrow.vector.DateDayVector dateDayVector) |
static DecimalWriter<RowData> |
DecimalWriter.forRow(org.apache.arrow.vector.DecimalVector decimalVector,
int precision,
int scale) |
static FloatWriter<RowData> |
FloatWriter.forRow(org.apache.arrow.vector.Float4Vector floatVector) |
static DoubleWriter<RowData> |
DoubleWriter.forRow(org.apache.arrow.vector.Float8Vector doubleVector) |
static IntWriter<RowData> |
IntWriter.forRow(org.apache.arrow.vector.IntVector intVector) |
static ArrayWriter<RowData> |
ArrayWriter.forRow(org.apache.arrow.vector.complex.ListVector listVector,
ArrowFieldWriter<ArrayData> elementWriter) |
static SmallIntWriter<RowData> |
SmallIntWriter.forRow(org.apache.arrow.vector.SmallIntVector intVector) |
static RowWriter<RowData> |
RowWriter.forRow(org.apache.arrow.vector.complex.StructVector structVector,
ArrowFieldWriter<RowData>[] fieldsWriters) |
static TinyIntWriter<RowData> |
TinyIntWriter.forRow(org.apache.arrow.vector.TinyIntVector tinyIntVector) |
static TimeWriter<RowData> |
TimeWriter.forRow(org.apache.arrow.vector.ValueVector valueVector) |
static TimestampWriter<RowData> |
TimestampWriter.forRow(org.apache.arrow.vector.ValueVector valueVector,
int precision) |
static VarBinaryWriter<RowData> |
VarBinaryWriter.forRow(org.apache.arrow.vector.VarBinaryVector varBinaryVector) |
static VarCharWriter<RowData> |
VarCharWriter.forRow(org.apache.arrow.vector.VarCharVector varCharVector) |
Modifier and Type | Method and Description |
---|---|
RowData |
ExecutionContextImpl.currentKey() |
RowData |
ExecutionContext.currentKey() |
Modifier and Type | Method and Description |
---|---|
void |
ExecutionContextImpl.setCurrentKey(RowData key) |
void |
ExecutionContext.setCurrentKey(RowData key)
Sets current key.
|
Modifier and Type | Method and Description |
---|---|
RowData |
FirstValueAggFunction.createAccumulator() |
RowData |
LastValueAggFunction.createAccumulator() |
Modifier and Type | Method and Description |
---|---|
void |
FirstValueAggFunction.accumulate(RowData rowData,
Object value) |
void |
LastValueAggFunction.accumulate(RowData rowData,
Object value) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
Object value,
Long order) |
void |
LastValueAggFunction.accumulate(RowData rowData,
Object value,
Long order) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
StringData value) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
StringData value,
Long order) |
T |
FirstValueAggFunction.getValue(RowData acc) |
T |
LastValueAggFunction.getValue(RowData rowData) |
void |
FirstValueAggFunction.resetAccumulator(RowData rowData) |
void |
LastValueAggFunction.resetAccumulator(RowData rowData) |
Modifier and Type | Interface and Description |
---|---|
interface |
Projection<IN extends RowData,OUT extends RowData>
Interface for code-generated projections, which map one RowData to another.
|
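
Production projections are code-generated, but the interface is a single `apply` method, so a hand-written stand-in illustrates the contract (a sketch; generated implementations typically emit `BinaryRowData` rather than `GenericRowData`):

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.runtime.generated.Projection;

public class ProjectionSketch {
    public static void main(String[] args) {
        // Map an input row to a new row holding only fields (0, 1), in that order.
        Projection<RowData, GenericRowData> firstTwo =
                row -> GenericRowData.of(row.getLong(0), row.getString(1));

        RowData in = GenericRowData.of(7L, StringData.fromString("x"), 3.14);
        RowData projected = firstTwo.apply(in); // (7, "x")
    }
}
```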
Modifier and Type | Method and Description |
---|---|
RowData |
NamespaceAggsHandleFunctionBase.createAccumulators()
Initializes the accumulators and saves them to an accumulators row.
|
RowData |
AggsHandleFunctionBase.createAccumulators()
Initializes the accumulators and saves them to an accumulators row.
|
RowData |
NamespaceAggsHandleFunctionBase.getAccumulators()
Gets the current accumulators (saved in a row) containing the current aggregated results.
|
RowData |
AggsHandleFunctionBase.getAccumulators()
Gets the current accumulators (saved in a row) containing the current aggregated results.
|
RowData |
AggsHandleFunction.getValue()
Gets the result of the aggregation from the current accumulators.
|
RowData |
NamespaceAggsHandleFunction.getValue(N namespace)
Gets the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
Modifier and Type | Method and Description |
---|---|
WatermarkGenerator<RowData> |
GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(WatermarkGeneratorSupplier.Context context) |
Modifier and Type | Method and Description |
---|---|
void |
NamespaceAggsHandleFunctionBase.accumulate(RowData inputRow)
Accumulates the input values to the accumulators.
|
void |
AggsHandleFunctionBase.accumulate(RowData input)
Accumulates the input values to the accumulators.
|
boolean |
JoinCondition.apply(RowData in1,
RowData in2) |
int |
RecordComparator.compare(RowData o1,
RowData o2) |
abstract Long |
WatermarkGenerator.currentWatermark(RowData row)
Returns the watermark for the current row or null if no watermark should be generated.
|
void |
TableAggsHandleFunction.emitValue(Collector<RowData> out,
RowData currentKey,
boolean isRetract)
Emits the result of the table aggregation through the collector.
|
void |
NamespaceTableAggsHandleFunction.emitValue(N namespace,
RowData key,
Collector<RowData> out)
Emits the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
boolean |
RecordEqualiser.equals(RowData row1,
RowData row2)
Returns
true if the rows are equal to each other and false otherwise. |
int |
HashFunction.hashCode(RowData row) |
void |
NamespaceAggsHandleFunctionBase.merge(N namespace,
RowData otherAcc)
Merges the other accumulators into current accumulators.
|
void |
AggsHandleFunctionBase.merge(RowData accumulators)
Merges the other accumulators into current accumulators.
|
void |
GeneratedWatermarkGeneratorSupplier.DefaultWatermarkGenerator.onEvent(RowData event,
long eventTimestamp,
WatermarkOutput output) |
void |
NormalizedKeyComputer.putKey(RowData record,
MemorySegment target,
int offset)
Writes a normalized key for the given record into the target
MemorySegment . |
void |
NamespaceAggsHandleFunctionBase.retract(RowData inputRow)
Retracts the input values from the accumulators.
|
void |
AggsHandleFunctionBase.retract(RowData input)
Retracts the input values from the accumulators.
|
void |
NamespaceAggsHandleFunctionBase.setAccumulators(N namespace,
RowData accumulators)
Set the current accumulators (saved in a row) which contains the current aggregated results.
|
void |
AggsHandleFunctionBase.setAccumulators(RowData accumulators)
Set the current accumulators (saved in a row) which contains the current aggregated results.
|
Modifier and Type | Method and Description |
---|---|
void |
TableAggsHandleFunction.emitValue(Collector<RowData> out,
RowData currentKey,
boolean isRetract)
Emits the result of the table aggregation through the collector.
|
void |
NamespaceTableAggsHandleFunction.emitValue(N namespace,
RowData key,
Collector<RowData> out)
Emits the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
Modifier and Type | Class and Description |
---|---|
class |
WrappedRowIterator<T extends RowData>
Wraps a
MutableObjectIterator into a Java RowIterator . |
Modifier and Type | Method and Description |
---|---|
RowData |
ProbeIterator.current() |
RowData |
LongHybridHashTable.getCurrentProbeRow() |
RowData |
BinaryHashTable.getCurrentProbeRow() |
Modifier and Type | Method and Description |
---|---|
abstract long |
LongHybridHashTable.getBuildLongKey(RowData row)
Gets the build-side long key (used by code generation).
|
abstract long |
LongHybridHashTable.getProbeLongKey(RowData row)
Gets the probe-side long key (used by code generation).
|
abstract BinaryRowData |
LongHybridHashTable.probeToBinary(RowData row)
Converts a probe-side row to a BinaryRowData (used by code generation).
|
void |
BinaryHashTable.putBuildRow(RowData row)
Puts a build-side row into the hash table.
|
void |
ProbeIterator.setInstance(RowData instance) |
boolean |
LongHybridHashTable.tryProbe(RowData record) |
boolean |
BinaryHashTable.tryProbe(RowData record)
Finds matching build-side rows for a probe row.
|
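
These methods form the build/probe protocol of the batch hash join. The sketch below shows the driving loop; table construction is elided (see the constructor following), and `endBuild()` and `getBuildSideIterator()` are assumptions based on the runtime operator rather than entries in the table above:

```java
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.binary.BinaryRowData;
import org.apache.flink.table.runtime.hashtable.BinaryHashTable;
import org.apache.flink.table.runtime.util.RowIterator;

public class HashJoinSketch {
    static void probeAll(
            BinaryHashTable table, Iterable<RowData> buildSide, Iterable<RowData> probeSide)
            throws Exception {
        for (RowData buildRow : buildSide) {
            table.putBuildRow(buildRow);      // hash the build side into memory/spill
        }
        table.endBuild();                     // assumption: finalizes partitions before probing

        for (RowData probeRow : probeSide) {
            if (table.tryProbe(probeRow)) {   // locate the matching build-side bucket
                RowIterator<BinaryRowData> matches = table.getBuildSideIterator();
                while (matches.advanceNext()) {
                    emit(table.getCurrentProbeRow(), matches.getRow());
                }
            }
        }
    }

    static void emit(RowData probe, RowData build) { /* join output elided */ }
}
```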
Constructor and Description |
---|
BinaryHashTable(Configuration conf,
Object owner,
AbstractRowDataSerializer buildSideSerializer,
AbstractRowDataSerializer probeSideSerializer,
Projection<RowData,BinaryRowData> buildSideProjection,
Projection<RowData,BinaryRowData> probeSideProjection,
MemoryManager memManager,
long reservedMemorySize,
IOManager ioManager,
int avgRecordLen,
long buildRowCount,
boolean useBloomFilters,
HashJoinType type,
JoinCondition condFunc,
boolean reverseJoin,
boolean[] filterNulls,
boolean tryDistinctBuildRow) |
Modifier and Type | Method and Description |
---|---|
RowData |
BinaryRowDataKeySelector.getKey(RowData value) |
RowData |
EmptyRowDataKeySelector.getKey(RowData value) |
Modifier and Type | Method and Description |
---|---|
InternalTypeInfo<RowData> |
BinaryRowDataKeySelector.getProducedType() |
InternalTypeInfo<RowData> |
RowDataKeySelector.getProducedType() |
InternalTypeInfo<RowData> |
EmptyRowDataKeySelector.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
BinaryRowDataKeySelector.getKey(RowData value) |
RowData |
EmptyRowDataKeySelector.getKey(RowData value) |
Constructor and Description |
---|
BinaryRowDataKeySelector(InternalTypeInfo<RowData> keyRowType,
GeneratedProjection generatedProjection) |
Modifier and Type | Method and Description |
---|---|
RowData |
MiniBatchGlobalGroupAggFunction.addInput(RowData previousAcc,
RowData input)
The
previousAcc is an accumulator, while input is a row in <key, accumulator>
schema; the generated MiniBatchGlobalGroupAggFunction.localAgg projects the input to an
accumulator in its merge method. |
RowData |
MiniBatchLocalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchIncrementalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
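
These `addInput` variants follow the map-bundle contract used by the mini-batch operators: the operator folds each input into a per-key buffer value via `addInput` and flushes once per mini-batch via `finishBundle` (listed below). A sketch of that driving loop, with the operator machinery elided:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.operators.aggregate.MiniBatchLocalGroupAggFunction;
import org.apache.flink.util.Collector;

public class MiniBatchSketch {
    static void drive(
            MiniBatchLocalGroupAggFunction fn,
            List<RowData> keys,
            List<RowData> inputs,
            Collector<RowData> out) throws Exception {
        Map<RowData, RowData> buffer = new HashMap<>();
        for (int i = 0; i < inputs.size(); i++) {
            RowData key = keys.get(i);
            RowData acc = buffer.get(key);                  // null on first sight of the key
            buffer.put(key, fn.addInput(acc, inputs.get(i)));
        }
        fn.finishBundle(buffer, out);                       // flush once per mini-batch
    }
}
```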
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
RowData |
MiniBatchGlobalGroupAggFunction.addInput(RowData previousAcc,
RowData input)
The
previousAcc is an accumulator, while input is a row in <key, accumulator>
schema; the generated MiniBatchGlobalGroupAggFunction.localAgg projects the input to an
accumulator in its merge method. |
RowData |
MiniBatchLocalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchIncrementalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
void |
GroupAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
GroupTableAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
abstract boolean |
RecordCounter.recordCountIsZero(RowData acc)
We store the counter in the accumulator.
|
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
void |
MiniBatchGroupAggFunction.finishBundle(Map<RowData,List<RowData>> buffer,
Collector<RowData> out) |
void |
MiniBatchGlobalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
MiniBatchLocalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
MiniBatchIncrementalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
GroupAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
GroupTableAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
MiniBatchIncrementalGroupAggFunction(GeneratedAggsHandleFunction genPartialAggsHandler,
GeneratedAggsHandleFunction genFinalAggsHandler,
KeySelector<RowData,RowData> finalKeySelector,
long stateRetentionTime) |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
LocalSlicingWindowAggOperator.collector
This is used for emitting elements with a given timestamp.
|
Modifier and Type | Method and Description |
---|---|
SlicingWindowOperator<RowData,?> |
SlicingWindowAggOperatorBuilder.build() |
Modifier and Type | Method and Description |
---|---|
void |
RecordsWindowBuffer.addElement(RowData key,
long sliceEnd,
RowData element) |
void |
WindowBuffer.addElement(RowData key,
long window,
RowData element)
Adds an element with its associated key into the buffer.
|
Modifier and Type | Method and Description |
---|---|
WindowBuffer |
RecordsWindowBuffer.LocalFactory.create(Object operatorOwner,
MemoryManager memoryManager,
long memorySize,
RuntimeContext runtimeContext,
Collector<RowData> collector,
java.time.ZoneId shiftTimeZone) |
WindowBuffer |
WindowBuffer.LocalFactory.create(Object operatorOwner,
MemoryManager memoryManager,
long memorySize,
RuntimeContext runtimeContext,
Collector<RowData> collector,
java.time.ZoneId shiftTimeZone)
Creates a
WindowBuffer for a local window that buffers elements in memory before
flushing. |
WindowBuffer |
RecordsWindowBuffer.Factory.create(Object operatorOwner,
MemoryManager memoryManager,
long memorySize,
RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime,
java.time.ZoneId shiftTimeZone) |
WindowBuffer |
WindowBuffer.Factory.create(Object operatorOwner,
MemoryManager memoryManager,
long memorySize,
RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime,
java.time.ZoneId shiftTimeZone)
Creates a
WindowBuffer that buffers elements in memory before flushing. |
Modifier and Type | Method and Description |
---|---|
void |
GlobalAggCombiner.combine(WindowKey windowKey,
Iterator<RowData> localAccs) |
void |
AggCombiner.combine(WindowKey windowKey,
Iterator<RowData> records) |
void |
LocalAggCombiner.combine(WindowKey windowKey,
Iterator<RowData> records) |
RecordsCombiner |
LocalAggCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
Collector<RowData> collector) |
RecordsCombiner |
GlobalAggCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime) |
RecordsCombiner |
AggCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime) |
Constructor and Description |
---|
LocalAggCombiner(NamespaceAggsHandleFunction<Long> aggregator,
Collector<RowData> collector) |
Modifier and Type | Field and Description |
---|---|
protected TypeSerializer<RowData> |
AbstractWindowAggProcessor.accSerializer |
Modifier and Type | Method and Description |
---|---|
protected void |
AbstractWindowAggProcessor.collect(RowData aggResult) |
boolean |
AbstractWindowAggProcessor.processElement(RowData key,
RowData element) |
Constructor and Description |
---|
AbstractWindowAggProcessor(GeneratedNamespaceAggsHandleFunction<Long> genAggsHandler,
WindowBuffer.Factory bufferFactory,
SliceAssigner sliceAssigner,
TypeSerializer<RowData> accSerializer,
java.time.ZoneId shiftTimeZone) |
SliceSharedWindowAggProcessor(GeneratedNamespaceAggsHandleFunction<Long> genAggsHandler,
WindowBuffer.Factory bufferFactory,
SliceSharedAssigner sliceAssigner,
TypeSerializer<RowData> accSerializer,
int indexOfCountStar,
java.time.ZoneId shiftTimeZone) |
SliceUnsharedWindowAggProcessor(GeneratedNamespaceAggsHandleFunction<Long> genAggsHandler,
WindowBuffer.Factory windowBufferFactory,
SliceUnsharedAssigner sliceAssigner,
TypeSerializer<RowData> accSerializer,
java.time.ZoneId shiftTimeZone) |
Modifier and Type | Method and Description |
---|---|
RowData |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction.addInput(RowData value,
RowData input) |
RowData |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction.addInput(RowData value,
RowData input) |
RowData |
RowTimeMiniBatchLatestChangeDeduplicateFunction.addInput(RowData value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
RowTimeMiniBatchDeduplicateFunction.addInput(List<RowData> value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
RowTimeMiniBatchDeduplicateFunction.addInput(List<RowData> value,
RowData input) |
RowData |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction.addInput(RowData value,
RowData input) |
RowData |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction.addInput(RowData value,
RowData input) |
RowData |
RowTimeMiniBatchLatestChangeDeduplicateFunction.addInput(RowData value,
RowData input) |
static void |
RowTimeDeduplicateFunction.deduplicateOnRowTime(ValueState<RowData> state,
RowData currentRow,
Collector<RowData> out,
boolean generateUpdateBefore,
boolean generateInsert,
int rowtimeIndex,
boolean keepLastRow)
Processes an element to deduplicate on keys with row-time semantics: sends the current element if it
is the last or first row, and retracts the previous element if needed.
|
static boolean |
DeduplicateFunctionHelper.isDuplicate(RowData preRow,
RowData currentRow,
int rowtimeIndex,
boolean keepLastRow)
Returns whether the current row is a duplicate of the previous row.
|
void |
ProcTimeDeduplicateKeepLastRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepFirstRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeDeduplicateFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
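
A sketch of how a caller might combine `isDuplicate` with keyed state for keep-last-row deduplication on event time; the state handling is elided, and the replace-or-keep decision shown is an assumption about how the helper's result is consumed:

```java
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.operators.deduplicate.DeduplicateFunctionHelper;

public class DedupSketch {
    // Returns the row to keep in state, assuming keep-last-row on the event-time field
    // at `rowtimeIndex`.
    static RowData keepLast(RowData stored, RowData incoming, int rowtimeIndex) {
        if (stored == null
                || DeduplicateFunctionHelper.isDuplicate(stored, incoming, rowtimeIndex, true)) {
            return incoming; // incoming row supersedes the stored one
        }
        return stored;
    }
}
```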
Modifier and Type | Method and Description |
---|---|
List<RowData> |
RowTimeMiniBatchDeduplicateFunction.addInput(List<RowData> value,
RowData input) |
static void |
RowTimeDeduplicateFunction.deduplicateOnRowTime(ValueState<RowData> state,
RowData currentRow,
Collector<RowData> out,
boolean generateUpdateBefore,
boolean generateInsert,
int rowtimeIndex,
boolean keepLastRow)
Processes an element to deduplicate on keys with row-time semantics: sends the current element if it
is the last or first row, and retracts the previous element if needed.
|
void |
RowTimeMiniBatchDeduplicateFunction.finishBundle(Map<RowData,List<RowData>> buffer,
Collector<RowData> out) |
void |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
RowTimeMiniBatchLatestChangeDeduplicateFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepLastRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepFirstRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeDeduplicateFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
ProcTimeDeduplicateKeepLastRowFunction(InternalTypeInfo<RowData> typeInfo,
long stateRetentionTime,
boolean generateUpdateBefore,
boolean generateInsert,
boolean inputInsertOnly,
GeneratedRecordEqualiser genRecordEqualiser) |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction(TypeSerializer<RowData> serializer,
long stateRetentionTime) |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction(InternalTypeInfo<RowData> typeInfo,
TypeSerializer<RowData> serializer,
long stateRetentionTime,
boolean generateUpdateBefore,
boolean generateInsert,
boolean inputInsertOnly,
GeneratedRecordEqualiser genRecordEqualiser) |
RowTimeDeduplicateFunction(InternalTypeInfo<RowData> typeInfo,
long minRetentionTime,
int rowtimeIndex,
boolean generateUpdateBefore,
boolean generateInsert,
boolean keepLastRow) |
RowTimeMiniBatchDeduplicateFunction(InternalTypeInfo<RowData> typeInfo,
TypeSerializer<RowData> serializer,
long minRetentionTime,
int rowtimeIndex,
boolean generateUpdateBefore,
boolean generateInsert,
boolean keepLastRow) |
RowTimeMiniBatchLatestChangeDeduplicateFunction(InternalTypeInfo<RowData> typeInfo,
TypeSerializer<RowData> serializer,
long minRetentionTime,
int rowtimeIndex,
boolean generateUpdateBefore,
boolean generateInsert,
boolean keepLastRow) |
Modifier and Type | Method and Description |
---|---|
SlicingWindowOperator<RowData,?> |
RowTimeWindowDeduplicateOperatorBuilder.build() |
Modifier and Type | Method and Description |
---|---|
RowTimeWindowDeduplicateOperatorBuilder |
RowTimeWindowDeduplicateOperatorBuilder.inputSerializer(AbstractRowDataSerializer<RowData> inputSerializer) |
RowTimeWindowDeduplicateOperatorBuilder |
RowTimeWindowDeduplicateOperatorBuilder.keySerializer(PagedTypeSerializer<RowData> keySerializer) |
Modifier and Type | Method and Description |
---|---|
void |
RowTimeDeduplicateRecordsCombiner.combine(WindowKey windowKey,
Iterator<RowData> records) |
RecordsCombiner |
RowTimeDeduplicateRecordsCombiner.Factory.createRecordsCombiner(RuntimeContext runtimeContext,
WindowTimerService<Long> timerService,
KeyedStateBackend<RowData> stateBackend,
WindowState<Long> windowState,
boolean isEventTime) |
Constructor and Description |
---|
Factory(TypeSerializer<RowData> recordSerializer,
int rowtimeIndex,
boolean keepLastRow) |
RowTimeDeduplicateRecordsCombiner(WindowTimerService<Long> timerService,
StateKeyContext keyContext,
WindowValueState<Long> dataState,
int rowtimeIndex,
boolean keepLastRow,
TypeSerializer<RowData> recordSerializer) |
Modifier and Type | Method and Description |
---|---|
boolean |
RowTimeWindowDeduplicateProcessor.processElement(RowData key,
RowData element) |
Constructor and Description |
---|
RowTimeWindowDeduplicateProcessor(TypeSerializer<RowData> inputSerializer,
WindowBuffer.Factory bufferFactory,
int windowEndIndex,
java.time.ZoneId shiftTimeZone) |
Modifier and Type | Method and Description |
---|---|
RowData |
SortMergeJoinIterator.getProbeRow() |
RowData |
OuterJoinPaddingUtil.padLeft(RowData leftRow)
Returns a padding result with the given left row.
|
RowData |
OuterJoinPaddingUtil.padRight(RowData rightRow)
Returns a padding result with the given right row.
|
Modifier and Type | Method and Description |
---|---|
boolean |
JoinConditionWithNullFilters.apply(RowData left,
RowData right) |
abstract void |
HashJoinOperator.join(RowIterator<BinaryRowData> buildIter,
RowData probeRow) |
RowData |
OuterJoinPaddingUtil.padLeft(RowData leftRow)
Returns a padding result with the given left row.
|
RowData |
OuterJoinPaddingUtil.padRight(RowData rightRow)
Returns a padding result with the given right row.
|
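
A sketch of the padding utility: constructed with the arities of both sides (the constructor and its argument order are assumptions based on the runtime join operators), it fills the missing side of an outer-join result with nulls:

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.operators.join.OuterJoinPaddingUtil;

public class PaddingSketch {
    public static void main(String[] args) {
        // Left side has 2 fields, right side has 3 (argument order assumed).
        OuterJoinPaddingUtil padder = new OuterJoinPaddingUtil(2, 3);

        RowData leftRow = GenericRowData.of(1L, 2L);
        RowData padded = padder.padLeft(leftRow); // 5-field row; right-side fields are null
    }
}
```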
Modifier and Type | Method and Description |
---|---|
void |
SortMergeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
HashJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
SortMergeJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
HashJoinOperator.processElement2(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
RowData |
PaddingRightMapFunction.map(RowData value) |
RowData |
PaddingLeftMapFunction.map(RowData value) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<RowData> |
IntervalJoinFunction.getProducedType() |
TypeInformation<RowData> |
PaddingRightMapFunction.getProducedType() |
TypeInformation<RowData> |
PaddingLeftMapFunction.getProducedType() |
TypeInformation<RowData> |
FilterAllFlatMapFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
void |
FilterAllFlatMapFunction.flatMap(RowData value,
Collector<RowData> out) |
void |
IntervalJoinFunction.join(RowData first,
RowData second,
Collector<RowData> out) |
RowData |
PaddingRightMapFunction.map(RowData value) |
RowData |
PaddingLeftMapFunction.map(RowData value) |
void |
IntervalJoinFunction.setJoinKey(RowData currentKey) |
Modifier and Type | Method and Description |
---|---|
void |
FilterAllFlatMapFunction.flatMap(RowData value,
Collector<RowData> out) |
void |
IntervalJoinFunction.join(RowData first,
RowData second,
Collector<RowData> out) |
Constructor and Description |
---|
FilterAllFlatMapFunction(InternalTypeInfo<RowData> inputTypeInfo) |
IntervalJoinFunction(GeneratedJoinCondition joinCondition,
InternalTypeInfo<RowData> returnTypeInfo,
boolean[] filterNulls) |
PaddingLeftMapFunction(OuterJoinPaddingUtil paddingUtil,
InternalTypeInfo<RowData> returnType) |
PaddingRightMapFunction(OuterJoinPaddingUtil paddingUtil,
InternalTypeInfo<RowData> returnType) |
ProcTimeIntervalJoin(FlinkJoinType joinType,
long leftLowerBound,
long leftUpperBound,
InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
IntervalJoinFunction genJoinFunc) |
RowTimeIntervalJoin(FlinkJoinType joinType,
long leftLowerBound,
long leftUpperBound,
long allowedLateness,
InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
IntervalJoinFunction joinFunc,
int leftTimeIdx,
int rightTimeIdx) |
Modifier and Type | Field and Description |
---|---|
protected TableFunctionCollector<RowData> |
LookupJoinRunner.collector |
Modifier and Type | Method and Description |
---|---|
TableFunctionResultFuture<RowData> |
AsyncLookupJoinRunner.createFetcherResultFuture(Configuration parameters) |
TableFunctionResultFuture<RowData> |
AsyncLookupJoinWithCalcRunner.createFetcherResultFuture(Configuration parameters) |
Collector<RowData> |
LookupJoinWithCalcRunner.getFetcherCollector() |
Collector<RowData> |
LookupJoinRunner.getFetcherCollector() |
Modifier and Type | Method and Description |
---|---|
void |
AsyncLookupJoinRunner.asyncInvoke(RowData input,
ResultFuture<RowData> resultFuture) |
void |
LookupJoinRunner.processElement(RowData in,
ProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Field and Description |
---|---|
RowData |
AbstractStreamingJoinOperator.OuterRecord.record |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
AbstractStreamingJoinOperator.collector |
protected InternalTypeInfo<RowData> |
AbstractStreamingJoinOperator.leftType |
protected InternalTypeInfo<RowData> |
AbstractStreamingJoinOperator.rightType |
Modifier and Type | Method and Description |
---|---|
Iterable<RowData> |
AbstractStreamingJoinOperator.AssociatedRecords.getRecords()
Gets the iterable of records.
|
Modifier and Type | Method and Description |
---|---|
static AbstractStreamingJoinOperator.AssociatedRecords |
AbstractStreamingJoinOperator.AssociatedRecords.of(RowData input,
boolean inputIsLeft,
JoinRecordStateView otherSideStateView,
JoinCondition condition)
Creates an
AbstractStreamingJoinOperator.AssociatedRecords which represents the records associated with the input
row. |
Modifier and Type | Method and Description |
---|---|
void |
StreamingJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
StreamingSemiAntiJoinOperator.processElement1(StreamRecord<RowData> element)
Processes an input element and outputs incrementally joined records; retraction messages may be
sent in some scenarios.
|
void |
StreamingJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
StreamingSemiAntiJoinOperator.processElement2(StreamRecord<RowData> element)
Processes an input element and outputs incrementally joined records; retraction messages may be
sent in some scenarios.
|
Constructor and Description |
---|
AbstractStreamingJoinOperator(InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
JoinInputSideSpec leftInputSideSpec,
JoinInputSideSpec rightInputSideSpec,
boolean[] filterNullKeys,
long stateRetentionTime) |
StreamingJoinOperator(InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
JoinInputSideSpec leftInputSideSpec,
JoinInputSideSpec rightInputSideSpec,
boolean leftIsOuter,
boolean rightIsOuter,
boolean[] filterNullKeys,
long stateRetentionTime) |
StreamingSemiAntiJoinOperator(boolean isAntiJoin,
InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
JoinInputSideSpec leftInputSideSpec,
JoinInputSideSpec rightInputSideSpec,
boolean[] filterNullKeys,
long stateRetentionTime) |
Modifier and Type | Method and Description |
---|---|
Iterable<RowData> |
JoinRecordStateView.getRecords()
Gets all the records under the current context (i.e. the join key).
|
Iterable<Tuple2<RowData,Integer>> |
OuterJoinRecordStateView.getRecordsAndNumOfAssociations()
Gets all the records and the number of associations under the current context (i.e. the join key).
|
KeySelector<RowData,RowData> |
JoinInputSideSpec.getUniqueKeySelector()
Returns the
KeySelector used to extract the unique key from the input row. |
InternalTypeInfo<RowData> |
JoinInputSideSpec.getUniqueKeyType()
Returns the
TypeInformation of the unique key. |
Modifier and Type | Method and Description |
---|---|
void |
JoinRecordStateView.addRecord(RowData record)
Add a new record to the state view.
|
void |
OuterJoinRecordStateView.addRecord(RowData record,
int numOfAssociations)
Adds a new record with the number of associations to the state view.
|
void |
JoinRecordStateView.retractRecord(RowData record)
Retract the record from the state view.
|
void |
OuterJoinRecordStateView.updateNumOfAssociations(RowData record,
int numOfAssociations)
Updates the number of associations that belong to the record.
|
Modifier and Type | Method and Description |
---|---|
static OuterJoinRecordStateView |
OuterJoinRecordStateViews.create(RuntimeContext ctx,
String stateName,
JoinInputSideSpec inputSideSpec,
InternalTypeInfo<RowData> recordType,
long retentionTime)
Creates an
OuterJoinRecordStateView depending on the JoinInputSideSpec . |
static JoinRecordStateView |
JoinRecordStateViews.create(RuntimeContext ctx,
String stateName,
JoinInputSideSpec inputSideSpec,
InternalTypeInfo<RowData> recordType,
long retentionTime)
Creates a
JoinRecordStateView depending on the JoinInputSideSpec . |
static JoinInputSideSpec |
JoinInputSideSpec.withUniqueKey(InternalTypeInfo<RowData> uniqueKeyType,
KeySelector<RowData,RowData> uniqueKeySelector)
Creates a
JoinInputSideSpec indicating that the input has a unique key. |
static JoinInputSideSpec |
JoinInputSideSpec.withUniqueKeyContainedByJoinKey(InternalTypeInfo<RowData> uniqueKeyType,
KeySelector<RowData,RowData> uniqueKeySelector)
Creates a
JoinInputSideSpec indicating that the input has a unique key and the unique key is
contained in the join key. |
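
A sketch of declaring input-side specs when wiring a streaming join, assuming the left input carries a unique key on its first field; `withoutUniqueKey()` is an assumed companion factory not listed above, and the key-selector helper is the one documented earlier on this page:

```java
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.planner.plan.utils.KeySelectorUtil;
import org.apache.flink.table.runtime.keyselector.RowDataKeySelector;
import org.apache.flink.table.runtime.operators.join.stream.state.JoinInputSideSpec;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class JoinSpecSketch {
    public static void main(String[] args) {
        RowType leftType =
                RowType.of(new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH));
        InternalTypeInfo<RowData> leftTypeInfo = InternalTypeInfo.of(leftType);

        // Unique key: field 0 of the left input.
        InternalTypeInfo<RowData> ukType = InternalTypeInfo.of(RowType.of(new BigIntType()));
        RowDataKeySelector ukSelector =
                KeySelectorUtil.getRowDataSelector(new int[] {0}, leftTypeInfo);

        JoinInputSideSpec leftSpec = JoinInputSideSpec.withUniqueKey(ukType, ukSelector);
        JoinInputSideSpec rightSpec = JoinInputSideSpec.withoutUniqueKey(); // assumed factory
    }
}
```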
Modifier and Type | Method and Description |
---|---|
void |
TemporalProcessTimeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
TemporalRowTimeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
TemporalProcessTimeJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
TemporalRowTimeJoinOperator.processElement2(StreamRecord<RowData> element) |
Constructor and Description |
---|
TemporalProcessTimeJoinOperator(InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
long minRetentionTime,
long maxRetentionTime,
boolean isLeftOuterJoin) |
TemporalRowTimeJoinOperator(InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
int leftTimeAttribute,
int rightTimeAttribute,
long minRetentionTime,
long maxRetentionTime,
boolean isLeftOuterJoin) |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
WindowJoinOperator.collector
This is used for emitting elements with a given timestamp.
|
Modifier and Type | Method and Description |
---|---|
abstract void |
WindowJoinOperator.join(Iterable<RowData> leftRecords,
Iterable<RowData> rightRecords) |
WindowJoinOperatorBuilder |
WindowJoinOperatorBuilder.leftSerializer(TypeSerializer<RowData> leftSerializer) |
void |
WindowJoinOperator.onEventTime(InternalTimer<RowData,Long> timer) |
void |
WindowJoinOperator.onProcessingTime(InternalTimer<RowData,Long> timer) |
void |
WindowJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
WindowJoinOperator.processElement2(StreamRecord<RowData> element) |
WindowJoinOperatorBuilder |
WindowJoinOperatorBuilder.rightSerializer(TypeSerializer<RowData> rightSerializer) |
Modifier and Type | Method and Description |
---|---|
int |
RowDataEventComparator.compare(RowData row1,
RowData row2) |
boolean |
IterativeConditionRunner.filter(RowData value,
IterativeCondition.Context<RowData> ctx) |
Modifier and Type | Method and Description |
---|---|
boolean |
IterativeConditionRunner.filter(RowData value,
IterativeCondition.Context<RowData> ctx) |
void |
PatternProcessFunctionRunner.processMatch(Map< |