Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.hbase.client.Mutation |
RowDataToMutationConverter.convertToMutation(RowData record) |
Modifier and Type | Method and Description |
---|---|
protected abstract InputFormat<RowData,?> |
AbstractHBaseDynamicTableSource.getInputFormat() |
Modifier and Type | Method and Description |
---|---|
RowData |
HBaseSerde.convertToRow(org.apache.hadoop.hbase.client.Result result)
Converts an HBase Result into a RowData. |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.hbase.client.Delete |
HBaseSerde.createDeleteMutation(RowData row)
Returns an instance of Delete that removes a record from the HBase table.
|
org.apache.hadoop.hbase.client.Put |
HBaseSerde.createPutMutation(RowData row)
Returns an instance of Put that writes a record to the HBase table.
|
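A minimal sketch (not part of the Javadoc) of how the two HBaseSerde methods above are typically combined in a sink: the row's RowKind decides whether a Put or a Delete is emitted. The `serde` and `mutator` arguments are assumed to be created elsewhere, and the HBaseSerde package name may differ across Flink versions.

```java
// Sketch only: assumes an already-configured HBaseSerde and BufferedMutator.
import org.apache.flink.connector.hbase.util.HBaseSerde; // package assumed; may differ by Flink version
import org.apache.flink.table.data.RowData;
import org.apache.flink.types.RowKind;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.Mutation;

public class HBaseWriteSketch {
    static void write(HBaseSerde serde, BufferedMutator mutator, RowData row) throws Exception {
        RowKind kind = row.getRowKind();
        // DELETE / UPDATE_BEFORE become Delete mutations, INSERT / UPDATE_AFTER become Puts.
        Mutation mutation = (kind == RowKind.DELETE || kind == RowKind.UPDATE_BEFORE)
                ? serde.createDeleteMutation(row)
                : serde.createPutMutation(row);
        mutator.mutate(mutation);
    }
}
```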
Modifier and Type | Method and Description |
---|---|
protected RowData |
HBaseRowDataInputFormat.mapResultToOutType(org.apache.hadoop.hbase.client.Result res) |
Modifier and Type | Method and Description |
---|---|
InputFormat<RowData,?> |
HBaseDynamicTableSource.getInputFormat() |
Modifier and Type | Method and Description |
---|---|
protected RowData |
HBaseRowDataInputFormat.mapResultToOutType(org.apache.hadoop.hbase.client.Result res) |
Modifier and Type | Method and Description |
---|---|
protected InputFormat<RowData,?> |
HBaseDynamicTableSource.getInputFormat() |
Modifier and Type | Method and Description |
---|---|
RowData |
AbstractJdbcRowConverter.toInternal(ResultSet resultSet) |
RowData |
JdbcRowConverter.toInternal(ResultSet resultSet)
|
Modifier and Type | Method and Description |
---|---|
FieldNamedPreparedStatement |
AbstractJdbcRowConverter.toExternal(RowData rowData,
FieldNamedPreparedStatement statement) |
FieldNamedPreparedStatement |
JdbcRowConverter.toExternal(RowData rowData,
FieldNamedPreparedStatement statement)
Converts data from Flink's internal RowData representation into JDBC objects.
|
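A hedged sketch of how a JDBC sink might use the converter above to fill a FieldNamedPreparedStatement before batching. Only toExternal comes from the table; the addBatch() call and the import packages are assumptions that may differ across Flink versions.

```java
import org.apache.flink.connector.jdbc.internal.converter.JdbcRowConverter; // package assumed
import org.apache.flink.connector.jdbc.statement.FieldNamedPreparedStatement;
import org.apache.flink.table.data.RowData;

public class JdbcWriteSketch {
    static void writeRecord(JdbcRowConverter converter,
                            FieldNamedPreparedStatement statement,
                            RowData row) throws Exception {
        // toExternal maps the internal RowData fields onto the prepared statement.
        FieldNamedPreparedStatement filled = converter.toExternal(row, statement);
        filled.addBatch(); // assumed: FieldNamedPreparedStatement mirrors PreparedStatement batching
    }
}
```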
Modifier and Type | Method and Description |
---|---|
void |
TableInsertOrUpdateStatementExecutor.addToBatch(RowData record) |
void |
TableBufferedStatementExecutor.addToBatch(RowData record) |
void |
TableBufferReducedStatementExecutor.addToBatch(RowData record) |
void |
TableSimpleStatementExecutor.addToBatch(RowData record) |
Modifier and Type | Method and Description |
---|---|
RowData |
JdbcRowDataInputFormat.nextRecord(RowData reuse)
Stores the next resultSet row in a tuple.
|
Modifier and Type | Method and Description |
---|---|
JdbcBatchingOutputFormat<RowData,?,?> |
JdbcDynamicOutputFormatBuilder.build() |
TypeInformation<RowData> |
JdbcRowDataInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
JdbcRowDataInputFormat.nextRecord(RowData reuse)
Stores the next resultSet row in a tuple.
|
Modifier and Type | Method and Description |
---|---|
JdbcDynamicOutputFormatBuilder |
JdbcDynamicOutputFormatBuilder.setRowDataTypeInfo(TypeInformation<RowData> rowDataTypeInfo) |
JdbcRowDataInputFormat.Builder |
JdbcRowDataInputFormat.Builder.setRowDataTypeInfo(TypeInformation<RowData> rowDataTypeInfo) |
Modifier and Type | Method and Description |
---|---|
protected DataStream<RowData> |
HiveTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
Modifier and Type | Method and Description |
---|---|
LinkedHashMap<String,String> |
HiveRowDataPartitionComputer.generatePartValues(RowData in) |
Modifier and Type | Method and Description |
---|---|
RowData |
HiveMapredSplitReader.nextRecord(RowData reuse) |
RowData |
SplitReader.nextRecord(RowData reuse)
Reads the next record from the input.
|
RowData |
HiveVectorizedOrcSplitReader.nextRecord(RowData reuse) |
RowData |
HiveTableInputFormat.nextRecord(RowData reuse) |
RowData |
HiveTableFileInputFormat.nextRecord(RowData reuse) |
RowData |
HiveVectorizedParquetSplitReader.nextRecord(RowData reuse) |
RowData |
HiveInputFormatPartitionReader.read(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
CompactReader<RowData> |
HiveCompactReaderFactory.create(CompactContext context) |
BulkFormat.Reader<RowData> |
HiveBulkFormatAdapter.createReader(Configuration config,
HiveSourceSplit split) |
TypeInformation<RowData> |
HiveBulkFormatAdapter.getProducedType() |
BulkFormat.Reader<RowData> |
HiveBulkFormatAdapter.restoreReader(Configuration config,
HiveSourceSplit split) |
Modifier and Type | Method and Description |
---|---|
RowData |
HiveMapredSplitReader.nextRecord(RowData reuse) |
RowData |
SplitReader.nextRecord(RowData reuse)
Reads the next record from the input.
|
RowData |
HiveVectorizedOrcSplitReader.nextRecord(RowData reuse) |
RowData |
HiveTableInputFormat.nextRecord(RowData reuse) |
RowData |
HiveTableFileInputFormat.nextRecord(RowData reuse) |
RowData |
HiveVectorizedParquetSplitReader.nextRecord(RowData reuse) |
RowData |
HiveInputFormatPartitionReader.read(RowData reuse) |
default void |
SplitReader.seekToRow(long rowCount,
RowData reuse)
Seeks to a particular row number.
|
void |
HiveVectorizedOrcSplitReader.seekToRow(long rowCount,
RowData reuse) |
void |
HiveVectorizedParquetSplitReader.seekToRow(long rowCount,
RowData reuse) |
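A hedged sketch of the reuse-based read loop the SplitReader methods above imply. The reachedEnd() and close() calls are assumptions (mirroring Flink's InputFormat contract), and reader construction is omitted.

```java
import org.apache.flink.connectors.hive.read.SplitReader; // package assumed
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;

public class SplitReadLoopSketch {
    static long countRecords(SplitReader reader, int arity) throws Exception {
        RowData reuse = new GenericRowData(arity); // reused to avoid per-record allocation
        long count = 0;
        while (!reader.reachedEnd()) {             // assumed end-of-input check
            RowData row = reader.nextRecord(reuse);
            if (row != null) {
                count++;
            }
        }
        reader.close();                            // assumed: SplitReader is Closeable
        return count;
    }
}
```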
Modifier and Type | Method and Description |
---|---|
HadoopPathBasedBulkWriter<RowData> |
HiveBulkWriterFactory.create(org.apache.hadoop.fs.Path targetPath,
org.apache.hadoop.fs.Path inProgressPath) |
java.util.function.Function<RowData,org.apache.hadoop.io.Writable> |
HiveWriterFactory.createRowDataConverter() |
Modifier and Type | Method and Description |
---|---|
RowData |
AvroRowDataDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
AvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
AvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<BulkWriter.Factory<RowData>> |
AvroFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
InputFormat<RowData,?> |
AvroFileSystemFormatFactory.createReader(FileSystemFormatFactory.ReaderContext context) |
TypeInformation<RowData> |
AvroRowDataDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
AvroRowDataDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
AvroRowDataSerializationSchema.serialize(RowData row) |
Constructor and Description |
---|
AvroRowDataDeserializationSchema(DeserializationSchema<org.apache.avro.generic.GenericRecord> nestedSchema,
AvroToRowDataConverters.AvroToRowDataConverter runtimeConverter,
TypeInformation<RowData> typeInfo)
Creates an Avro deserialization schema for the given logical type.
|
AvroRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> typeInfo)
Creates an Avro deserialization schema for the given logical type.
|
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
RegistryAvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
RegistryAvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
Modifier and Type | Method and Description |
---|---|
RowData |
DebeziumAvroDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
DebeziumAvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
DebeziumAvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
DebeziumAvroDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
DebeziumAvroDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
DebeziumAvroSerializationSchema.serialize(RowData rowData) |
Modifier and Type | Method and Description |
---|---|
void |
DebeziumAvroDeserializationSchema.deserialize(byte[] message,
Collector<RowData> out) |
Constructor and Description |
---|
DebeziumAvroDeserializationSchema(RowType rowType,
TypeInformation<RowData> producedTypeInfo,
String schemaRegistryUrl) |
DebeziumAvroDeserializationSchema(RowType rowType,
TypeInformation<RowData> producedTypeInfo,
String schemaRegistryUrl,
Map<String,?> registryConfigs) |
Modifier and Type | Method and Description |
---|---|
RowData |
CsvRowDataDeserializationSchema.deserialize(byte[] message) |
RowData |
CsvFileSystemFormatFactory.CsvInputFormat.nextRecord(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
CsvFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
CsvFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
InputFormat<RowData,?> |
CsvFileSystemFormatFactory.createReader(FileSystemFormatFactory.ReaderContext context) |
TypeInformation<RowData> |
CsvRowDataDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonNode |
RowDataToCsvConverters.RowDataToCsvConverter.convert(org.apache.flink.shaded.jackson2.com.fasterxml.jackson.dataformat.csv.CsvMapper csvMapper,
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ContainerNode<?> container,
RowData row) |
boolean |
CsvRowDataDeserializationSchema.isEndOfStream(RowData nextElement) |
RowData |
CsvFileSystemFormatFactory.CsvInputFormat.nextRecord(RowData reuse) |
byte[] |
CsvRowDataSerializationSchema.serialize(RowData row) |
Constructor and Description |
---|
Builder(RowType rowType,
TypeInformation<RowData> resultTypeInfo)
Creates a CSV deserialization schema for the given TypeInformation with optional parameters. |
Modifier and Type | Method and Description |
---|---|
RowData |
JsonRowDataDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
JsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
JsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
JsonRowDataDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
JsonRowDataDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
JsonRowDataSerializationSchema.serialize(RowData row) |
Constructor and Description |
---|
JsonRowDataDeserializationSchema(RowType rowType,
TypeInformation<RowData> resultTypeInfo,
boolean failOnMissingField,
boolean ignoreParseErrors,
TimestampFormat timestampFormat) |
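An illustrative construction of the schema via the constructor above. This is a sketch under stated assumptions: InternalTypeInfo and TimestampFormat come from Flink's table runtime and JSON format (their packages have moved between versions), and in a real job Flink calls open() on the schema before deserialize().

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.json.JsonRowDataDeserializationSchema;
import org.apache.flink.formats.json.TimestampFormat; // package assumed; moved in later versions
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

import java.nio.charset.StandardCharsets;

public class JsonDeserializationSketch {
    public static void main(String[] args) throws Exception {
        // RowType.of generates field names f0, f1, ... which the JSON keys below match.
        RowType rowType = RowType.of(new VarCharType(255), new BigIntType());
        TypeInformation<RowData> producedTypeInfo = InternalTypeInfo.of(rowType);

        JsonRowDataDeserializationSchema schema = new JsonRowDataDeserializationSchema(
                rowType,
                producedTypeInfo,
                false,                // failOnMissingField
                true,                 // ignoreParseErrors
                TimestampFormat.SQL);

        RowData row = schema.deserialize(
                "{\"f0\":\"hello\",\"f1\":42}".getBytes(StandardCharsets.UTF_8));
        System.out.println(row.getString(0) + " / " + row.getLong(1));
    }
}
```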
Modifier and Type | Method and Description |
---|---|
RowData |
CanalJsonDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
CanalJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
CanalJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
CanalJsonDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
CanalJsonDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
CanalJsonSerializationSchema.serialize(RowData row) |
Modifier and Type | Method and Description |
---|---|
static CanalJsonDeserializationSchema.Builder |
CanalJsonDeserializationSchema.builder(RowType rowType,
TypeInformation<RowData> resultTypeInfo)
Creates a builder for building a CanalJsonDeserializationSchema. |
void |
CanalJsonDeserializationSchema.deserialize(byte[] message,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
RowData |
DebeziumJsonDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
DebeziumJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
DebeziumJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
DeserializationSchema<RowData> |
DebeziumJsonDecodingFormat.createRuntimeDecoder(DynamicTableSource.Context context,
DataType physicalDataType) |
TypeInformation<RowData> |
DebeziumJsonDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
DebeziumJsonDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
DebeziumJsonSerializationSchema.serialize(RowData rowData) |
Modifier and Type | Method and Description |
---|---|
void |
DebeziumJsonDeserializationSchema.deserialize(byte[] message,
Collector<RowData> out) |
Constructor and Description |
---|
DebeziumJsonDeserializationSchema(DataType physicalDataType,
List<org.apache.flink.formats.json.debezium.DebeziumJsonDecodingFormat.ReadableMetadata> requestedMetadata,
TypeInformation<RowData> producedTypeInfo,
boolean schemaInclude,
boolean ignoreParseErrors,
TimestampFormat timestampFormat) |
Modifier and Type | Method and Description |
---|---|
RowData |
MaxwellJsonDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
MaxwellJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
MaxwellJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
MaxwellJsonDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
MaxwellJsonDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
MaxwellJsonSerializationSchema.serialize(RowData element) |
Modifier and Type | Method and Description |
---|---|
void |
MaxwellJsonDeserializationSchema.deserialize(byte[] message,
Collector<RowData> out) |
Constructor and Description |
---|
MaxwellJsonDeserializationSchema(RowType rowType,
TypeInformation<RowData> resultTypeInfo,
boolean ignoreParseErrors,
TimestampFormat timestampFormatOption) |
Modifier and Type | Method and Description |
---|---|
BulkDecodingFormat<RowData> |
ParquetFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<BulkWriter.Factory<RowData>> |
ParquetFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
protected ParquetVectorizedInputFormat.ParquetReaderBatch<RowData> |
ParquetColumnarRowInputFormat.createReaderBatch(WritableColumnVector[] writableVectors,
VectorizedColumnBatch columnarBatch,
Pool.Recycler<ParquetVectorizedInputFormat.ParquetReaderBatch<RowData>> recycler) |
TypeInformation<RowData> |
ParquetColumnarRowInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
protected ParquetVectorizedInputFormat.ParquetReaderBatch<RowData> |
ParquetColumnarRowInputFormat.createReaderBatch(WritableColumnVector[] writableVectors,
VectorizedColumnBatch columnarBatch,
Pool.Recycler<ParquetVectorizedInputFormat.ParquetReaderBatch<RowData>> recycler) |
Modifier and Type | Method and Description |
---|---|
org.apache.parquet.hadoop.ParquetWriter<RowData> |
ParquetRowDataBuilder.FlinkParquetBuilder.createWriter(org.apache.parquet.io.OutputFile out) |
static ParquetWriterFactory<RowData> |
ParquetRowDataBuilder.createWriterFactory(RowType rowType,
Configuration conf,
boolean utcTimestamp)
Creates a Parquet BulkWriter.Factory. |
protected org.apache.parquet.hadoop.api.WriteSupport<RowData> |
ParquetRowDataBuilder.getWriteSupport(Configuration conf) |
Modifier and Type | Method and Description |
---|---|
void |
ParquetRowDataWriter.write(RowData record)
Writes a record to Parquet.
|
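A hedged sketch of plugging the factory above into a bulk-encoded file sink. The output path and row type are placeholders; pairing a BulkWriter.Factory with StreamingFileSink.forBulkFormat is a common pattern, not the only one.

```java
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

public class ParquetSinkSketch {
    static StreamingFileSink<RowData> buildSink() {
        RowType rowType = RowType.of(new VarCharType(64), new IntType());
        ParquetWriterFactory<RowData> factory =
                ParquetRowDataBuilder.createWriterFactory(rowType, new Configuration(), true /* utcTimestamp */);
        // Each part file is written with the RowData-aware Parquet writer created by the factory.
        return StreamingFileSink.forBulkFormat(new Path("/tmp/parquet-out"), factory).build();
    }
}
```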
Modifier and Type | Method and Description |
---|---|
RowData |
OrcColumnarRowSplitReader.nextRecord(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
BulkDecodingFormat<RowData> |
OrcFileFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<BulkWriter.Factory<RowData>> |
OrcFileFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT> |
OrcColumnarRowFileInputFormat.createReaderBatch(SplitT split,
OrcVectorizedBatchWrapper<BatchT> orcBatch,
Pool.Recycler<AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT>> recycler,
int batchSize) |
TypeInformation<RowData> |
OrcColumnarRowFileInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
OrcColumnarRowSplitReader.nextRecord(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT> |
OrcColumnarRowFileInputFormat.createReaderBatch(SplitT split,
OrcVectorizedBatchWrapper<BatchT> orcBatch,
Pool.Recycler<AbstractOrcFileInputFormat.OrcReaderBatch<RowData,BatchT>> recycler,
int batchSize) |
Modifier and Type | Method and Description |
---|---|
BulkWriter<RowData> |
OrcNoHiveBulkWriterFactory.create(FSDataOutputStream out) |
Modifier and Type | Method and Description |
---|---|
void |
RowDataVectorizer.vectorize(RowData row,
org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch) |
Modifier and Type | Field and Description |
---|---|
protected DecodingFormat<DeserializationSchema<RowData>> |
KafkaDynamicSource.keyDecodingFormat
Optional format for decoding keys from Kafka.
|
protected EncodingFormat<SerializationSchema<RowData>> |
KafkaDynamicSink.keyEncodingFormat
Optional format for encoding keys to Kafka.
|
protected FlinkKafkaPartitioner<RowData> |
KafkaDynamicSink.partitioner
Partitioner to select Kafka partition for each item.
|
protected DecodingFormat<DeserializationSchema<RowData>> |
KafkaDynamicSource.valueDecodingFormat
Format for decoding values from Kafka.
|
protected EncodingFormat<SerializationSchema<RowData>> |
KafkaDynamicSink.valueEncodingFormat
Format for encoding values to Kafka.
|
protected WatermarkStrategy<RowData> |
KafkaDynamicSource.watermarkStrategy
Watermark strategy used to generate per-partition watermarks.
|
Modifier and Type | Method and Description |
---|---|
protected FlinkKafkaConsumer<RowData> |
KafkaDynamicSource.createKafkaConsumer(DeserializationSchema<RowData> keyDeserialization,
DeserializationSchema<RowData> valueDeserialization,
TypeInformation<RowData> producedTypeInfo) |
protected FlinkKafkaProducer<RowData> |
KafkaDynamicSink.createKafkaProducer(SerializationSchema<RowData> keySerialization,
SerializationSchema<RowData> valueSerialization) |
DeserializationSchema<RowData> |
UpsertKafkaDynamicTableFactory.DecodingFormatWrapper.createRuntimeDecoder(DynamicTableSource.Context context,
DataType producedDataType) |
SerializationSchema<RowData> |
UpsertKafkaDynamicTableFactory.EncodingFormatWrapper.createRuntimeEncoder(DynamicTableSink.Context context,
DataType consumedDataType) |
static Optional<FlinkKafkaPartitioner<RowData>> |
KafkaOptions.getFlinkKafkaPartitioner(ReadableConfig tableOptions,
ClassLoader classLoader)
The partitioner can be either "fixed", "round-robin", or the full class name of a custom partitioner.
|
Modifier and Type | Method and Description |
---|---|
void |
KafkaDynamicSource.applyWatermark(WatermarkStrategy<RowData> watermarkStrategy) |
protected FlinkKafkaConsumer<RowData> |
KafkaDynamicSource.createKafkaConsumer(DeserializationSchema<RowData> keyDeserialization,
DeserializationSchema<RowData> valueDeserialization,
TypeInformation<RowData> producedTypeInfo) |
protected FlinkKafkaProducer<RowData> |
KafkaDynamicSink.createKafkaProducer(SerializationSchema<RowData> keySerialization,
SerializationSchema<RowData> valueSerialization) |
protected KafkaDynamicSink |
KafkaDynamicTableFactory.createKafkaTableSink(DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
KafkaSinkSemantic semantic,
Integer parallelism) |
protected KafkaDynamicSource |
KafkaDynamicTableFactory.createKafkaTableSource(DataType physicalDataType,
DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat,
DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
List<String> topics,
Pattern topicPattern,
Properties properties,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis) |
Constructor and Description |
---|
DecodingFormatWrapper(DecodingFormat<DeserializationSchema<RowData>> innerDecodingFormat) |
EncodingFormatWrapper(EncodingFormat<SerializationSchema<RowData>> innerEncodingFormat) |
KafkaDynamicSink(DataType physicalDataType,
EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
String topic,
Properties properties,
FlinkKafkaPartitioner<RowData> partitioner,
KafkaSinkSemantic semantic,
boolean upsertMode,
Integer parallelism) |
KafkaDynamicSource(DataType physicalDataType,
DecodingFormat<DeserializationSchema<RowData>> keyDecodingFormat,
DecodingFormat<DeserializationSchema<RowData>> valueDecodingFormat,
int[] keyProjection,
int[] valueProjection,
String keyPrefix,
List<String> topics,
Pattern topicPattern,
Properties properties,
StartupMode startupMode,
Map<KafkaTopicPartition,Long> specificStartupOffsets,
long startupTimestampMillis,
boolean upsertMode) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataKinesisDeserializationSchema.deserialize(byte[] recordValue,
String partitionKey,
String seqNum,
long approxArrivalTimestamp,
String stream,
String shardId) |
Modifier and Type | Method and Description |
---|---|
static KinesisPartitioner<RowData> |
KinesisOptions.getKinesisPartitioner(ReadableConfig tableOptions,
CatalogTable targetTable,
ClassLoader classLoader)
Constructs the Kinesis partitioner for a targetTable based on the currently set tableOptions. |
TypeInformation<RowData> |
RowDataKinesisDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
String |
RowDataFieldsKinesisPartitioner.getPartitionId(RowData element) |
Modifier and Type | Method and Description |
---|---|
BulkWriter.Factory<RowData> |
HiveShimV200.createOrcBulkWriterFactory(Configuration conf,
String schema,
LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> |
HiveShimV100.createOrcBulkWriterFactory(Configuration conf,
String schema,
LogicalType[] fieldTypes) |
BulkWriter.Factory<RowData> |
HiveShim.createOrcBulkWriterFactory(Configuration conf,
String schema,
LogicalType[] fieldTypes)
Creates an ORC BulkWriter.Factory for different Hive versions. |
Modifier and Type | Method and Description |
---|---|
OutputFormat<RowData> |
OutputFormatProvider.createOutputFormat()
Creates an
OutputFormat instance. |
SinkFunction<RowData> |
SinkFunctionProvider.createSinkFunction()
Creates a
SinkFunction instance. |
Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> |
DataStreamSinkProvider.consumeDataStream(DataStream<RowData> dataStream)
Consumes the given Java DataStream and returns the sink transformation DataStreamSink. |
static OutputFormatProvider |
OutputFormatProvider.of(OutputFormat<RowData> outputFormat)
Helper method for creating a static provider.
|
static SinkFunctionProvider |
SinkFunctionProvider.of(SinkFunction<RowData> sinkFunction)
Helper method for creating a static provider.
|
static SinkFunctionProvider |
SinkFunctionProvider.of(SinkFunction<RowData> sinkFunction,
Integer sinkParallelism)
Helper method for creating a SinkFunction provider with a provided sink parallelism.
|
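A hedged sketch of returning one of these sink-side providers from a DynamicTableSink. The SinkFunction is assumed to be implemented elsewhere, and the parallelism of 2 is an arbitrary illustration.

```java
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.apache.flink.table.connector.sink.DynamicTableSink;
import org.apache.flink.table.connector.sink.SinkFunctionProvider;
import org.apache.flink.table.data.RowData;

public class SinkProviderSketch {
    static DynamicTableSink.SinkRuntimeProvider provide(SinkFunction<RowData> sinkFunction) {
        // Wraps the SinkFunction; the optional parallelism overrides the default sink parallelism.
        return SinkFunctionProvider.of(sinkFunction, 2);
    }
}
```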
Modifier and Type | Method and Description |
---|---|
InputFormat<RowData,?> |
InputFormatProvider.createInputFormat()
Creates an
InputFormat instance. |
Source<RowData,?,?> |
SourceProvider.createSource()
Creates a
Source instance. |
SourceFunction<RowData> |
SourceFunctionProvider.createSourceFunction()
Creates a
SourceFunction instance. |
DataStream<RowData> |
DataStreamScanProvider.produceDataStream(StreamExecutionEnvironment execEnv)
Creates a scan Java DataStream from a StreamExecutionEnvironment. |
Modifier and Type | Method and Description |
---|---|
static InputFormatProvider |
InputFormatProvider.of(InputFormat<RowData,?> inputFormat)
Helper method for creating a static provider.
|
static SourceProvider |
SourceProvider.of(Source<RowData,?,?> source)
Helper method for creating a static provider.
|
static SourceFunctionProvider |
SourceFunctionProvider.of(SourceFunction<RowData> sourceFunction,
boolean isBounded)
Helper method for creating a static provider.
|
Modifier and Type | Method and Description |
---|---|
void |
SupportsWatermarkPushDown.applyWatermark(WatermarkStrategy<RowData> watermarkStrategy)
Provides a WatermarkStrategy which defines how to generate Watermarks in the stream source. |
Modifier and Type | Class and Description |
---|---|
class |
BoxedWrapperRowData
An implementation of RowData that is also backed by an array of Java Objects, similar to GenericRowData. |
class |
ColumnarRowData
Columnar row to support access to vector column data.
|
class |
GenericRowData
An internal data structure representing data of RowType and other (possibly nested) structured types such as StructuredType. |
class |
UpdatableRowData
|
Modifier and Type | Method and Description |
---|---|
RowData |
UpdatableRowData.getRow() |
RowData |
ColumnarRowData.getRow(int pos,
int numFields) |
RowData |
ColumnarArrayData.getRow(int pos,
int numFields) |
RowData |
BoxedWrapperRowData.getRow(int pos,
int numFields) |
RowData |
UpdatableRowData.getRow(int pos,
int numFields) |
RowData |
GenericRowData.getRow(int pos,
int numFields) |
RowData |
GenericArrayData.getRow(int pos,
int numFields) |
RowData |
RowData.getRow(int pos,
int numFields)
Returns the row value at the given position.
|
RowData |
ArrayData.getRow(int pos,
int numFields)
Returns the row value at the given position.
|
Modifier and Type | Method and Description |
---|---|
Object |
RowData.FieldGetter.getFieldOrNull(RowData row) |
Constructor and Description |
---|
UpdatableRowData(RowData row,
int arity) |
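A small, self-contained example showing how a GenericRowData is created and read back, including positional access, nested-row access via getRow, and the null-safe FieldGetter. It uses only the classes listed above plus the logical types they operate on; the values are illustrative.

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.types.RowKind;

public class RowDataAccessSketch {
    public static void main(String[] args) {
        // An outer row with a string, a long, and a nested 2-field row.
        GenericRowData nested = GenericRowData.of(StringData.fromString("nested"), 1L);
        GenericRowData row = GenericRowData.of(StringData.fromString("Alice"), 42L, nested);
        row.setRowKind(RowKind.UPDATE_AFTER);

        String name = row.getString(0).toString();   // StringData -> String
        long count = row.getLong(1);
        RowData inner = row.getRow(2, 2);             // position 2, nested arity 2

        // FieldGetter gives null-safe, type-driven access for generic code.
        RowData.FieldGetter getter = RowData.createFieldGetter(new BigIntType(), 1);
        Object maybeCount = getter.getFieldOrNull(row);

        System.out.println(name + " " + count + " " + inner.getLong(1) + " " + maybeCount);
    }
}
```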
Modifier and Type | Class and Description |
---|---|
class |
BinaryRowData
An implementation of
RowData which is backed by MemorySegment instead of Object. |
class |
NestedRowData
Its memory storage structure is exactly the same as BinaryRowData. |
Modifier and Type | Method and Description |
---|---|
RowData |
BinaryRowData.getRow(int pos,
int numFields) |
RowData |
BinaryArrayData.getRow(int pos,
int numFields) |
RowData |
NestedRowData.getRow(int pos,
int numFields) |
static RowData |
BinarySegmentUtils.readRowData(MemorySegment[] segments,
int numFields,
int baseOffset,
long offsetAndSize)
Gets an instance of RowData from the underlying MemorySegments. |
Modifier and Type | Method and Description |
---|---|
NestedRowData |
NestedRowData.copy(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowRowConverter.toInternal(Row external) |
RowData |
StructuredObjectConverter.toInternal(T external) |
Modifier and Type | Method and Description |
---|---|
T |
StructuredObjectConverter.toExternal(RowData internal) |
Row |
RowRowConverter.toExternal(RowData internal) |
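A hedged sketch of round-tripping between the external Row and the internal RowData with RowRowConverter. The create(DataType) factory and open(ClassLoader) call are assumed from Flink's data-structure converter API, to which the toInternal/toExternal methods above belong.

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.conversion.RowRowConverter;
import org.apache.flink.table.types.DataType;
import org.apache.flink.types.Row;

public class RowRowConverterSketch {
    public static void main(String[] args) {
        DataType dataType = DataTypes.ROW(
                DataTypes.FIELD("name", DataTypes.STRING()),
                DataTypes.FIELD("count", DataTypes.BIGINT()));

        RowRowConverter converter = RowRowConverter.create(dataType); // assumed factory method
        converter.open(RowRowConverterSketch.class.getClassLoader());

        RowData internal = converter.toInternal(Row.of("Alice", 42L));
        Row external = converter.toExternal(internal);
        System.out.println(external);
    }
}
```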
Modifier and Type | Method and Description |
---|---|
static boolean |
RowDataUtil.isAccumulateMsg(RowData row)
Returns true if the message is either RowKind.INSERT or RowKind.UPDATE_AFTER, which refers to an accumulate operation of aggregation. |
static boolean |
RowDataUtil.isRetractMsg(RowData row)
Returns true if the message is either RowKind.DELETE or RowKind.UPDATE_BEFORE, which refers to a retract operation of aggregation. |
External |
DataFormatConverters.DataFormatConverter.toExternal(RowData row,
int column)
Given an internalType row, converts the value at column `column` to its external (Java) equivalent.
|
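A short sketch of how the changelog classification helpers above are typically used, e.g. to decide whether an incoming row adds to or retracts from an aggregate. The RowDataUtil package is an assumption taken from the table runtime.

```java
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.util.RowDataUtil; // package assumed

public class ChangelogDispatchSketch {
    static long delta(RowData input) {
        if (RowDataUtil.isAccumulateMsg(input)) {
            return +1;  // INSERT / UPDATE_AFTER add to the aggregate
        } else if (RowDataUtil.isRetractMsg(input)) {
            return -1;  // DELETE / UPDATE_BEFORE retract from it
        }
        return 0;
    }
}
```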
Modifier and Type | Class and Description |
---|---|
class |
JoinedRowData
|
Modifier and Type | Method and Description |
---|---|
RowData |
JoinedRowData.getRow(int pos,
int numFields) |
Modifier and Type | Method and Description |
---|---|
JoinedRowData |
JoinedRowData.replace(RowData row1,
RowData row2) |
Constructor and Description |
---|
JoinedRowData(RowData row1,
RowData row2) |
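A brief sketch of the JoinedRowData pattern listed above: two physical rows are presented as one logical row without copying fields, and replace() lets an operator reuse a single wrapper per output record. The import package is an assumption and has moved between Flink versions.

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.data.utils.JoinedRowData; // package assumed; may differ by version

public class JoinedRowSketch {
    public static void main(String[] args) {
        RowData left = GenericRowData.of(StringData.fromString("key-1"), 10L);
        RowData right = GenericRowData.of(StringData.fromString("payload"));

        JoinedRowData joined = new JoinedRowData(left, right);
        System.out.println(joined.getLong(1));     // field 1 of the left row
        System.out.println(joined.getString(2));   // field 0 of the right row, offset by left arity

        // Reuse the same wrapper for the next pair of rows.
        joined.replace(GenericRowData.of(StringData.fromString("key-2"), 20L), right);
    }
}
```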
Modifier and Type | Method and Description |
---|---|
RowData |
VectorizedColumnBatch.getRow(int rowId,
int colId) |
Modifier and Type | Method and Description |
---|---|
void |
BinaryWriter.writeRow(int pos,
RowData value,
RowDataSerializer serializer) |
Modifier and Type | Method and Description |
---|---|
RowData |
ChangelogCsvDeserializer.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
ChangelogCsvFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
DeserializationSchema<RowData> |
ChangelogCsvFormat.createRuntimeDecoder(DynamicTableSource.Context context,
DataType producedDataType) |
TypeInformation<RowData> |
ChangelogCsvDeserializer.getProducedType() |
TypeInformation<RowData> |
SocketSourceFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
ChangelogCsvDeserializer.isEndOfStream(RowData nextElement) |
Modifier and Type | Method and Description |
---|---|
void |
SocketSourceFunction.run(SourceFunction.SourceContext<RowData> ctx) |
Constructor and Description |
---|
ChangelogCsvDeserializer(List<LogicalType> parsingTypes,
DynamicTableSource.DataStructureConverter converter,
TypeInformation<RowData> producedTypeInfo,
String columnDelimiter) |
SocketDynamicTableSource(String hostname,
int port,
byte byteDelimiter,
DecodingFormat<DeserializationSchema<RowData>> decodingFormat,
DataType producedDataType) |
SocketSourceFunction(String hostname,
int port,
byte byteDelimiter,
DeserializationSchema<RowData> deserializer) |
Modifier and Type | Method and Description |
---|---|
BulkDecodingFormat<RowData> |
BulkReaderFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions)
Creates a
BulkDecodingFormat from the given context and format options. |
InputFormat<RowData,?> |
FileSystemFormatFactory.createReader(FileSystemFormatFactory.ReaderContext context)
Deprecated. Creates an InputFormat reader. |
Modifier and Type | Method and Description |
---|---|
DataGeneratorSource<RowData> |
DataGenTableSource.createSource() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataGenerator.next() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataPartitionComputer.projectColumnsToWrite(RowData in) |
Modifier and Type | Method and Description |
---|---|
BulkWriter<RowData> |
FileSystemTableSink.ProjectionBulkFactory.create(FSDataOutputStream out) |
PartitionReader<P,RowData> |
FileSystemLookupFunction.getPartitionReader() |
TypeInformation<RowData> |
DeserializationSchemaAdapter.getProducedType() |
TypeInformation<RowData> |
FileSystemLookupFunction.getResultType() |
RecordAndPosition<RowData> |
ColumnarRowIterator.next() |
Modifier and Type | Method and Description |
---|---|
void |
SerializationSchemaAdapter.encode(RowData element,
OutputStream stream) |
LinkedHashMap<String,String> |
RowDataPartitionComputer.generatePartValues(RowData in) |
String |
FileSystemTableSink.TableBucketAssigner.getBucketId(RowData element,
BucketAssigner.Context context) |
RowData |
RowDataPartitionComputer.projectColumnsToWrite(RowData in) |
boolean |
FileSystemTableSink.TableRollingPolicy.shouldRollOnEvent(PartFileInfo<String> partFileState,
RowData element) |
Modifier and Type | Method and Description |
---|---|
RowData |
RawFormatDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DecodingFormat<DeserializationSchema<RowData>> |
RawFormatFactory.createDecodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
EncodingFormat<SerializationSchema<RowData>> |
RawFormatFactory.createEncodingFormat(DynamicTableFactory.Context context,
ReadableConfig formatOptions) |
TypeInformation<RowData> |
RawFormatDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean |
RawFormatDeserializationSchema.isEndOfStream(RowData nextElement) |
byte[] |
RawFormatSerializationSchema.serialize(RowData row) |
Constructor and Description |
---|
RawFormatDeserializationSchema(LogicalType deserializedType,
TypeInformation<RowData> producedTypeInfo,
String charsetName,
boolean isBigEndian) |
Modifier and Type | Method and Description |
---|---|
RowData |
LastValueAggFunction.createAccumulator() |
RowData |
FirstValueAggFunction.createAccumulator() |
Modifier and Type | Method and Description |
---|---|
void |
LastValueAggFunction.accumulate(RowData rowData,
Object value) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
Object value) |
void |
LastValueAggFunction.accumulate(RowData rowData,
Object value,
Long order) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
Object value,
Long order) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
StringData value) |
void |
FirstValueAggFunction.accumulate(RowData rowData,
StringData value,
Long order) |
T |
LastValueAggFunction.getValue(RowData rowData) |
T |
FirstValueAggFunction.getValue(RowData acc) |
void |
LastValueAggFunction.resetAccumulator(RowData rowData) |
void |
FirstValueAggFunction.resetAccumulator(RowData rowData) |
Modifier and Type | Method and Description |
---|---|
static RowDataKeySelector |
KeySelectorUtil.getRowDataSelector(int[] keyFields,
InternalTypeInfo<RowData> rowType)
Creates a RowDataKeySelector to extract keys from a DataStream whose type is InternalTypeInfo of RowData. |
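A hedged sketch of using the helper above to key a stream of RowData by its first field. The stream and row type are placeholders, and the KeySelectorUtil package is an assumption (it lives in the planner in some Flink versions).

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.planner.plan.utils.KeySelectorUtil; // package assumed
import org.apache.flink.table.runtime.keyselector.RowDataKeySelector;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.RowType;

public class KeyByRowDataSketch {
    static KeyedStream<RowData, RowData> keyByFirstField(DataStream<RowData> input, RowType rowType) {
        RowDataKeySelector keySelector =
                KeySelectorUtil.getRowDataSelector(new int[]{0}, InternalTypeInfo.of(rowType));
        return input.keyBy(keySelector);
    }
}
```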
Modifier and Type | Field and Description |
---|---|
protected DataFormatConverters.DataFormatConverter<RowData,Row> |
SelectTableSinkBase.converter |
Modifier and Type | Method and Description |
---|---|
protected static InternalTypeInfo<RowData> |
SelectTableSinkBase.createTypeInfo(TableSchema tableSchema)
Creates InternalTypeInfo of RowData based on the given table schema. |
TypeInformation<RowData> |
BatchSelectTableSink.getOutputType() |
TypeInformation<RowData> |
StreamSelectTableSink.getRecordType() |
Modifier and Type | Method and Description |
---|---|
protected Row |
BatchSelectTableSink.convertToRow(RowData element) |
Modifier and Type | Method and Description |
---|---|
protected Row |
StreamSelectTableSink.convertToRow(Tuple2<Boolean,RowData> tuple2) |
Modifier and Type | Method and Description |
---|---|
static ArrowWriter<RowData> |
ArrowUtils.createRowDataArrowWriter(org.apache.arrow.vector.VectorSchemaRoot root,
RowType rowType)
Creates an ArrowWriter for the blink planner for the specified VectorSchemaRoot. |
Modifier and Type | Method and Description |
---|---|
ArrowReader<RowData> |
RowDataArrowSerializer.createArrowReader(org.apache.arrow.vector.VectorSchemaRoot root) |
ArrowWriter<RowData> |
RowDataArrowSerializer.createArrowWriter() |
Modifier and Type | Method and Description |
---|---|
DataStream<RowData> |
ArrowTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
TypeInformation<RowData> |
ArrowSourceFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataArrowReader.read(int rowId) |
Modifier and Type | Method and Description |
---|---|
static BigIntWriter<RowData> |
BigIntWriter.forRow(org.apache.arrow.vector.BigIntVector bigIntVector) |
static BooleanWriter<RowData> |
BooleanWriter.forRow(org.apache.arrow.vector.BitVector bitVector) |
static DateWriter<RowData> |
DateWriter.forRow(org.apache.arrow.vector.DateDayVector dateDayVector) |
static DecimalWriter<RowData> |
DecimalWriter.forRow(org.apache.arrow.vector.DecimalVector decimalVector,
int precision,
int scale) |
static FloatWriter<RowData> |
FloatWriter.forRow(org.apache.arrow.vector.Float4Vector floatVector) |
static DoubleWriter<RowData> |
DoubleWriter.forRow(org.apache.arrow.vector.Float8Vector doubleVector) |
static IntWriter<RowData> |
IntWriter.forRow(org.apache.arrow.vector.IntVector intVector) |
static ArrayWriter<RowData> |
ArrayWriter.forRow(org.apache.arrow.vector.complex.ListVector listVector,
ArrowFieldWriter<ArrayData> elementWriter) |
static SmallIntWriter<RowData> |
SmallIntWriter.forRow(org.apache.arrow.vector.SmallIntVector intVector) |
static RowWriter<RowData> |
RowWriter.forRow(org.apache.arrow.vector.complex.StructVector structVector,
ArrowFieldWriter<RowData>[] fieldsWriters) |
static TinyIntWriter<RowData> |
TinyIntWriter.forRow(org.apache.arrow.vector.TinyIntVector tinyIntVector) |
static TimeWriter<RowData> |
TimeWriter.forRow(org.apache.arrow.vector.ValueVector valueVector) |
static TimestampWriter<RowData> |
TimestampWriter.forRow(org.apache.arrow.vector.ValueVector valueVector,
int precision) |
static VarBinaryWriter<RowData> |
VarBinaryWriter.forRow(org.apache.arrow.vector.VarBinaryVector varBinaryVector) |
static VarCharWriter<RowData> |
VarCharWriter.forRow(org.apache.arrow.vector.VarCharVector varCharVector) |
Modifier and Type | Method and Description |
---|---|
RowData |
ExecutionContext.currentKey() |
RowData |
ExecutionContextImpl.currentKey() |
Modifier and Type | Method and Description |
---|---|
void |
ExecutionContext.setCurrentKey(RowData key)
Sets current key.
|
void |
ExecutionContextImpl.setCurrentKey(RowData key) |
Modifier and Type | Interface and Description |
---|---|
interface |
Projection<IN extends RowData,OUT extends RowData>
Interface for code-generated projections, which map one RowData to another.
|
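The Projection interface above is normally implemented by generated code; the hand-written sketch below only illustrates the contract (one RowData mapped to another). The single apply(IN) mapping method is an assumption, and real generated projections typically emit BinaryRowData; GenericRowData is used here for readability.

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.generated.Projection; // package assumed

// Projects the input row down to its first (INT) field.
public class FirstFieldProjection implements Projection<RowData, GenericRowData> {
    @Override
    public GenericRowData apply(RowData row) {
        GenericRowData out = new GenericRowData(1);
        out.setField(0, row.isNullAt(0) ? null : row.getInt(0));
        return out;
    }
}
```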
Modifier and Type | Method and Description |
---|---|
RowData |
AggsHandleFunctionBase.createAccumulators()
Initializes the accumulators and saves them to an accumulators row.
|
RowData |
NamespaceAggsHandleFunctionBase.createAccumulators()
Initializes the accumulators and saves them to an accumulators row.
|
RowData |
AggsHandleFunctionBase.getAccumulators()
Gets the current accumulators (saved in a row), which contain the current aggregated results.
|
RowData |
NamespaceAggsHandleFunctionBase.getAccumulators()
Gets the current accumulators (saved in a row), which contain the current aggregated results.
|
RowData |
AggsHandleFunction.getValue()
Gets the result of the aggregation from the current accumulators.
|
RowData |
NamespaceAggsHandleFunction.getValue(N namespace)
Gets the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
Modifier and Type | Method and Description |
---|---|
void |
AggsHandleFunctionBase.accumulate(RowData input)
Accumulates the input values to the accumulators.
|
void |
NamespaceAggsHandleFunctionBase.accumulate(RowData inputRow)
Accumulates the input values to the accumulators.
|
boolean |
JoinCondition.apply(RowData in1,
RowData in2) |
int |
RecordComparator.compare(RowData o1,
RowData o2) |
abstract Long |
WatermarkGenerator.currentWatermark(RowData row)
Returns the watermark for the current row or null if no watermark should be generated.
|
void |
TableAggsHandleFunction.emitValue(Collector<RowData> out,
RowData currentKey,
boolean isRetract)
Emit the result of the table aggregation through the collector.
|
void |
NamespaceTableAggsHandleFunction.emitValue(N namespace,
RowData key,
Collector<RowData> out)
Emits the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
boolean |
RecordEqualiser.equals(RowData row1,
RowData row2)
Returns
true if the rows are equal to each other and false otherwise. |
int |
HashFunction.hashCode(RowData row) |
void |
NamespaceAggsHandleFunctionBase.merge(N namespace,
RowData otherAcc)
Merges the other accumulators into current accumulators.
|
void |
AggsHandleFunctionBase.merge(RowData accumulators)
Merges the other accumulators into current accumulators.
|
void |
NormalizedKeyComputer.putKey(RowData record,
MemorySegment target,
int offset)
Writes a normalized key for the given record into the target MemorySegment. |
void |
AggsHandleFunctionBase.retract(RowData input)
Retracts the input values from the accumulators.
|
void |
NamespaceAggsHandleFunctionBase.retract(RowData inputRow)
Retracts the input values from the accumulators.
|
void |
NamespaceAggsHandleFunctionBase.setAccumulators(N namespace,
RowData accumulators)
Sets the current accumulators (saved in a row), which contain the current aggregated results.
|
void |
AggsHandleFunctionBase.setAccumulators(RowData accumulators)
Sets the current accumulators (saved in a row), which contain the current aggregated results.
|
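A hedged sketch of the call sequence the methods above imply for a (code-generated) AggsHandleFunction: initialize the accumulators, feed accumulate or retract messages according to their RowKind, then read out the result. Function creation and state setup are omitted, and the package name is an assumption.

```java
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.generated.AggsHandleFunction; // package assumed
import org.apache.flink.types.RowKind;

public class AggsHandleLifecycleSketch {
    static RowData aggregate(AggsHandleFunction function, Iterable<RowData> inputs) throws Exception {
        function.setAccumulators(function.createAccumulators());
        for (RowData input : inputs) {
            RowKind kind = input.getRowKind();
            if (kind == RowKind.INSERT || kind == RowKind.UPDATE_AFTER) {
                function.accumulate(input);   // accumulate messages
            } else {
                function.retract(input);      // retract messages
            }
        }
        return function.getValue();           // result built from the current accumulators
    }
}
```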
Modifier and Type | Method and Description |
---|---|
void |
TableAggsHandleFunction.emitValue(Collector<RowData> out,
RowData currentKey,
boolean isRetract)
Emit the result of the table aggregation through the collector.
|
void |
NamespaceTableAggsHandleFunction.emitValue(N namespace,
RowData key,
Collector<RowData> out)
Emits the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
Modifier and Type | Class and Description |
---|---|
class |
WrappedRowIterator<T extends RowData>
Wraps a MutableObjectIterator into a Java RowIterator. |
Modifier and Type | Method and Description |
---|---|
RowData |
ProbeIterator.current() |
RowData |
BinaryHashTable.getCurrentProbeRow() |
RowData |
LongHybridHashTable.getCurrentProbeRow() |
Modifier and Type | Method and Description |
---|---|
abstract long |
LongHybridHashTable.getBuildLongKey(RowData row)
For code generation: gets the build-side long key.
|
abstract long |
LongHybridHashTable.getProbeLongKey(RowData row)
For code generation: gets the probe-side long key.
|
abstract BinaryRowData |
LongHybridHashTable.probeToBinary(RowData row)
For code generation: converts a probe-side row to a BinaryRowData.
|
void |
BinaryHashTable.putBuildRow(RowData row)
Puts a build-side row into the hash table.
|
void |
ProbeIterator.setInstance(RowData instance) |
boolean |
BinaryHashTable.tryProbe(RowData record)
Finds matching build-side rows for a probe row.
|
boolean |
LongHybridHashTable.tryProbe(RowData record) |
Constructor and Description |
---|
BinaryHashTable(Configuration conf,
Object owner,
AbstractRowDataSerializer buildSideSerializer,
AbstractRowDataSerializer probeSideSerializer,
Projection<RowData,BinaryRowData> buildSideProjection,
Projection<RowData,BinaryRowData> probeSideProjection,
MemoryManager memManager,
long reservedMemorySize,
IOManager ioManager,
int avgRecordLen,
long buildRowCount,
boolean useBloomFilters,
HashJoinType type,
JoinCondition condFunc,
boolean reverseJoin,
boolean[] filterNulls,
boolean tryDistinctBuildRow) |
Modifier and Type | Method and Description |
---|---|
RowData |
EmptyRowDataKeySelector.getKey(RowData value) |
RowData |
BinaryRowDataKeySelector.getKey(RowData value) |
Modifier and Type | Method and Description |
---|---|
InternalTypeInfo<RowData> |
RowDataKeySelector.getProducedType() |
InternalTypeInfo<RowData> |
EmptyRowDataKeySelector.getProducedType() |
InternalTypeInfo<RowData> |
BinaryRowDataKeySelector.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
EmptyRowDataKeySelector.getKey(RowData value) |
RowData |
BinaryRowDataKeySelector.getKey(RowData value) |
Constructor and Description |
---|
BinaryRowDataKeySelector(InternalTypeInfo<RowData> keyRowType,
GeneratedProjection generatedProjection) |
Modifier and Type | Method and Description |
---|---|
RowData |
MiniBatchLocalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchIncrementalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchGlobalGroupAggFunction.addInput(RowData previousAcc,
RowData input)
The previousAcc is an accumulator, while input is a row in <key, accumulator> schema; the generated MiniBatchGlobalGroupAggFunction.localAgg projects the input to an accumulator in its merge method. |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
RowData |
MiniBatchLocalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchIncrementalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchGlobalGroupAggFunction.addInput(RowData previousAcc,
RowData input)
The previousAcc is an accumulator, while input is a row in <key, accumulator> schema; the generated MiniBatchGlobalGroupAggFunction.localAgg projects the input to an accumulator in its merge method. |
void |
GroupTableAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
GroupAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
void |
MiniBatchGroupAggFunction.finishBundle(Map<RowData,List<RowData>> buffer,
Collector<RowData> out) |
void |
MiniBatchLocalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
MiniBatchIncrementalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
MiniBatchGlobalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
GroupTableAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
GroupAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
MiniBatchIncrementalGroupAggFunction(GeneratedAggsHandleFunction genPartialAggsHandler,
GeneratedAggsHandleFunction genFinalAggsHandler,
KeySelector<RowData,RowData> finalKeySelector,
long stateRetentionTime) |
Modifier and Type | Method and Description |
---|---|
RowData |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction.addInput(RowData value,
RowData input) |
RowData |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction.addInput(RowData value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
RowTimeMiniBatchDeduplicateFunction.addInput(List<RowData> value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
RowTimeMiniBatchDeduplicateFunction.addInput(List<RowData> value,
RowData input) |
RowData |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction.addInput(RowData value,
RowData input) |
RowData |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction.addInput(RowData value,
RowData input) |
static void |
RowTimeDeduplicateFunction.deduplicateOnRowTime(ValueState<RowData> state,
RowData currentRow,
Collector<RowData> out,
boolean generateUpdateBefore,
boolean generateInsert,
int rowtimeIndex,
boolean keepLastRow)
Processes an element to deduplicate on keys with row-time semantics: sends the current element if it is the last or first row, and retracts the previous element if needed.
|
void |
RowTimeDeduplicateFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepFirstRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepLastRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
RowTimeMiniBatchDeduplicateFunction.addInput(List<RowData> value,
RowData input) |
static void |
RowTimeDeduplicateFunction.deduplicateOnRowTime(ValueState<RowData> state,
RowData currentRow,
Collector<RowData> out,
boolean generateUpdateBefore,
boolean generateInsert,
int rowtimeIndex,
boolean keepLastRow)
Processes an element to deduplicate on keys with row-time semantics: sends the current element if it is the last or first row, and retracts the previous element if needed.
|
void |
RowTimeMiniBatchDeduplicateFunction.finishBundle(Map<RowData,List<RowData>> buffer,
Collector<RowData> out) |
void |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
RowTimeDeduplicateFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepFirstRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeDeduplicateKeepLastRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
ProcTimeDeduplicateKeepLastRowFunction(InternalTypeInfo<RowData> typeInfo,
long stateRetentionTime,
boolean generateUpdateBefore,
boolean generateInsert,
boolean inputInsertOnly) |
ProcTimeMiniBatchDeduplicateKeepFirstRowFunction(TypeSerializer<RowData> serializer,
long stateRetentionTime) |
ProcTimeMiniBatchDeduplicateKeepLastRowFunction(InternalTypeInfo<RowData> typeInfo,
TypeSerializer<RowData> serializer,
long stateRetentionTime,
boolean generateUpdateBefore,
boolean generateInsert,
boolean inputInsertOnly) |
RowTimeDeduplicateFunction(InternalTypeInfo<RowData> typeInfo,
long minRetentionTime,
int rowtimeIndex,
boolean generateUpdateBefore,
boolean generateInsert,
boolean keepLastRow) |
RowTimeMiniBatchDeduplicateFunction(InternalTypeInfo<RowData> typeInfo,
TypeSerializer<RowData> serializer,
long minRetentionTime,
int rowtimeIndex,
boolean generateUpdateBefore,
boolean generateInsert,
boolean keepLastRow) |
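The deduplicate functions and constructors listed above keep either the first or the last row per key and, when the downstream needs a changelog, retract the previously emitted row before emitting the new one. Below is a minimal standalone sketch of the keep-last-row logic using plain Java collections in place of Flink's keyed ValueState and Collector; all names are illustrative, not Flink API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeepLastRowSketch {
    // Emitted messages: "+I" insert, "-U" retract previous version, "+U" updated row.
    static final List<String> OUT = new ArrayList<>();

    // Stands in for the per-key ValueState<RowData> used by the real operator.
    static final Map<String, String> LAST_ROW_BY_KEY = new HashMap<>();

    static void processElement(String key, String row, boolean generateUpdateBefore) {
        String previous = LAST_ROW_BY_KEY.put(key, row);
        if (previous == null) {
            OUT.add("+I " + row);                // first row seen for this key
        } else {
            if (generateUpdateBefore) {
                OUT.add("-U " + previous);       // retract the old last row
            }
            OUT.add("+U " + row);                // emit the new last row
        }
    }

    public static void main(String[] args) {
        processElement("user-1", "user-1,click,10:00", true);
        processElement("user-1", "user-1,click,10:05", true);
        processElement("user-2", "user-2,view,10:01", true);
        OUT.forEach(System.out::println);
    }
}
```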
Modifier and Type | Method and Description |
---|---|
RowData |
SortMergeJoinIterator.getProbeRow() |
RowData |
OuterJoinPaddingUtil.padLeft(RowData leftRow)
Returns a padding result with the given left row.
|
RowData |
OuterJoinPaddingUtil.padRight(RowData rightRow)
Returns a padding result with the given right row.
|
Modifier and Type | Method and Description |
---|---|
abstract void |
HashJoinOperator.join(RowIterator<BinaryRowData> buildIter,
RowData probeRow) |
RowData |
OuterJoinPaddingUtil.padLeft(RowData leftRow)
Returns a padding result with the given left row.
|
RowData |
OuterJoinPaddingUtil.padRight(RowData rightRow)
Returns a padding result with the given right row.
|
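OuterJoinPaddingUtil.padLeft and padRight fill the missing side of an outer-join result with nulls. A rough standalone equivalent over Object[] rows (illustrative only, not the Flink utility itself):

```java
import java.util.Arrays;

public class PaddingSketch {
    // Pads a left row to full width: the right-side fields stay null (left outer join miss).
    static Object[] padLeft(Object[] leftRow, int rightArity) {
        return Arrays.copyOf(leftRow, leftRow.length + rightArity); // copied tail is already null
    }

    // Pads a right row: the left-side fields stay null (right outer join miss).
    static Object[] padRight(Object[] rightRow, int leftArity) {
        Object[] joined = new Object[leftArity + rightRow.length];
        System.arraycopy(rightRow, 0, joined, leftArity, rightRow.length);
        return joined;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(padLeft(new Object[] {1, "a"}, 2)));  // [1, a, null, null]
        System.out.println(Arrays.toString(padRight(new Object[] {2, "b"}, 2))); // [null, null, 2, b]
    }
}
```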
Modifier and Type | Method and Description |
---|---|
void |
HashJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
SortMergeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
HashJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
SortMergeJoinOperator.processElement2(StreamRecord<RowData> element) |
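SortMergeJoinOperator consumes two inputs that have been sorted on the join key and merges them in lockstep. A simplified standalone sketch of that merge step for an inner join, in plain Java with illustrative names:

```java
import java.util.Arrays;
import java.util.List;

public class SortMergeJoinSketch {
    static class Row {
        final int key;
        final String payload;
        Row(int key, String payload) { this.key = key; this.payload = payload; }
    }

    // Both inputs must already be sorted by key, as the real operator guarantees via sorting.
    static void innerJoin(List<Row> left, List<Row> right) {
        int l = 0, r = 0;
        while (l < left.size() && r < right.size()) {
            int cmp = Integer.compare(left.get(l).key, right.get(r).key);
            if (cmp < 0) {
                l++;
            } else if (cmp > 0) {
                r++;
            } else {
                // Emit the cross product of all rows sharing this key.
                int key = left.get(l).key;
                int rStart = r;
                for (; l < left.size() && left.get(l).key == key; l++) {
                    for (r = rStart; r < right.size() && right.get(r).key == key; r++) {
                        System.out.println(left.get(l).payload + " |x| " + right.get(r).payload);
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        innerJoin(
            Arrays.asList(new Row(1, "L1"), new Row(2, "L2"), new Row(2, "L2b")),
            Arrays.asList(new Row(2, "R2"), new Row(3, "R3")));
    }
}
```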
Modifier and Type | Field and Description |
---|---|
protected TableFunctionCollector<RowData> |
LookupJoinRunner.collector |
Modifier and Type | Method and Description |
---|---|
TableFunctionResultFuture<RowData> |
AsyncLookupJoinWithCalcRunner.createFetcherResultFuture(Configuration parameters) |
TableFunctionResultFuture<RowData> |
AsyncLookupJoinRunner.createFetcherResultFuture(Configuration parameters) |
Collector<RowData> |
LookupJoinRunner.getFetcherCollector() |
Collector<RowData> |
LookupJoinWithCalcRunner.getFetcherCollector() |
Modifier and Type | Method and Description |
---|---|
void |
AsyncLookupJoinRunner.asyncInvoke(RowData input,
ResultFuture<RowData> resultFuture) |
void |
LookupJoinRunner.processElement(RowData in,
ProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
void |
AsyncLookupJoinRunner.asyncInvoke(RowData input,
ResultFuture<RowData> resultFuture) |
void |
LookupJoinRunner.processElement(RowData in,
ProcessFunction.Context ctx,
Collector<RowData> out) |
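AsyncLookupJoinRunner.asyncInvoke hands each input row to an asynchronous lookup and completes the ResultFuture once the external system responds. A standalone sketch of the same pattern with CompletableFuture standing in for Flink's ResultFuture; the dimension-table map and all names are illustrative:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class AsyncLookupSketch {
    // Stands in for the external dimension table queried by the lookup join.
    static final Map<Integer, String> DIM_TABLE = new HashMap<>();
    static {
        DIM_TABLE.put(1, "Alice");
        DIM_TABLE.put(2, "Bob");
    }

    // Asynchronously looks up the dimension row for the given key and joins it with the fact row.
    static CompletableFuture<List<String>> asyncInvoke(int key, String factRow) {
        return CompletableFuture.supplyAsync(() -> {
            String dim = DIM_TABLE.get(key);
            return dim == null
                    ? Collections.emptyList()                       // inner join: no output on a miss
                    : Collections.singletonList(factRow + " -> " + dim);
        });
    }

    public static void main(String[] args) {
        asyncInvoke(1, "order-42").thenAccept(System.out::println).join();
        asyncInvoke(3, "order-43").thenAccept(System.out::println).join();
    }
}
```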
Modifier and Type | Field and Description |
---|---|
RowData |
AbstractStreamingJoinOperator.OuterRecord.record |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
AbstractStreamingJoinOperator.collector |
protected InternalTypeInfo<RowData> |
AbstractStreamingJoinOperator.leftType |
protected InternalTypeInfo<RowData> |
AbstractStreamingJoinOperator.rightType |
Modifier and Type | Method and Description |
---|---|
Iterable<RowData> |
AbstractStreamingJoinOperator.AssociatedRecords.getRecords()
Gets the iterable of records.
|
Modifier and Type | Method and Description |
---|---|
static AbstractStreamingJoinOperator.AssociatedRecords |
AbstractStreamingJoinOperator.AssociatedRecords.of(RowData input,
boolean inputIsLeft,
JoinRecordStateView otherSideStateView,
JoinCondition condition)
Creates an AbstractStreamingJoinOperator.AssociatedRecords which represents the records associated with the input row. |
Modifier and Type | Method and Description |
---|---|
void |
StreamingSemiAntiJoinOperator.processElement1(StreamRecord<RowData> element)
Processes an input element and outputs incremental join records; retraction messages are sent in some scenarios.
|
void |
StreamingJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
StreamingSemiAntiJoinOperator.processElement2(StreamRecord<RowData> element)
Processes an input element and outputs incremental join records; retraction messages are sent in some scenarios.
|
void |
StreamingJoinOperator.processElement2(StreamRecord<RowData> element) |
Constructor and Description |
---|
AbstractStreamingJoinOperator(InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
JoinInputSideSpec leftInputSideSpec,
JoinInputSideSpec rightInputSideSpec,
boolean[] filterNullKeys,
long stateRetentionTime) |
StreamingJoinOperator(InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
JoinInputSideSpec leftInputSideSpec,
JoinInputSideSpec rightInputSideSpec,
boolean leftIsOuter,
boolean rightIsOuter,
boolean[] filterNullKeys,
long stateRetentionTime) |
StreamingSemiAntiJoinOperator(boolean isAntiJoin,
InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
JoinInputSideSpec leftInputSideSpec,
JoinInputSideSpec rightInputSideSpec,
boolean[] filterNullKeys,
long stateRetentionTime) |
Modifier and Type | Method and Description |
---|---|
Iterable<RowData> |
JoinRecordStateView.getRecords()
Gets all the records under the current context (i.e.
|
Iterable<Tuple2<RowData,Integer>> |
OuterJoinRecordStateView.getRecordsAndNumOfAssociations()
Gets all the records and number of associations under the current context (i.e.
|
KeySelector<RowData,RowData> |
JoinInputSideSpec.getUniqueKeySelector()
Returns the KeySelector to extract the unique key from the input row. |
InternalTypeInfo<RowData> |
JoinInputSideSpec.getUniqueKeyType()
Returns the
TypeInformation of the unique key. |
Modifier and Type | Method and Description |
---|---|
void |
JoinRecordStateView.addRecord(RowData record)
Add a new record to the state view.
|
void |
OuterJoinRecordStateView.addRecord(RowData record,
int numOfAssociations)
Adds a new record with the number of associations to the state view.
|
void |
JoinRecordStateView.retractRecord(RowData record)
Retract the record from the state view.
|
void |
OuterJoinRecordStateView.updateNumOfAssociations(RowData record,
int numOfAssociations)
Updates the number of associations belonging to the record.
|
Modifier and Type | Method and Description |
---|---|
static JoinRecordStateView |
JoinRecordStateViews.create(RuntimeContext ctx,
String stateName,
JoinInputSideSpec inputSideSpec,
InternalTypeInfo<RowData> recordType,
long retentionTime)
Creates a JoinRecordStateView depending on the JoinInputSideSpec. |
static OuterJoinRecordStateView |
OuterJoinRecordStateViews.create(RuntimeContext ctx,
String stateName,
JoinInputSideSpec inputSideSpec,
InternalTypeInfo<RowData> recordType,
long retentionTime)
Creates an OuterJoinRecordStateView depending on the JoinInputSideSpec. |
static JoinInputSideSpec |
JoinInputSideSpec.withUniqueKey(InternalTypeInfo<RowData> uniqueKeyType,
KeySelector<RowData,RowData> uniqueKeySelector)
Creates a JoinInputSideSpec indicating that the input has a unique key. |
static JoinInputSideSpec |
JoinInputSideSpec.withUniqueKeyContainedByJoinKey(InternalTypeInfo<RowData> uniqueKeyType,
KeySelector<RowData,RowData> uniqueKeySelector)
Creates a JoinInputSideSpec indicating that the input has a unique key and that the unique key is contained in the join key. |
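JoinInputSideSpec tells the streaming join which state layout to use: with a unique key, records can be stored in a map keyed by that unique key and updated in place; without one, all records for a join key must be retained and a retraction has to scan them. A plain-Java sketch of the two layouts (illustrative only, not the Flink state views):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinStateViewSketch {
    // Layout used when the input side has a unique key: one row per unique key, updated in place.
    static final Map<String, String> UNIQUE_KEY_VIEW = new HashMap<>();

    // Fallback layout without a unique key: every row for the join key is retained.
    static final Map<String, List<String>> LIST_VIEW = new HashMap<>();

    static void addWithUniqueKey(String uniqueKey, String row) {
        UNIQUE_KEY_VIEW.put(uniqueKey, row); // upsert: the new version replaces the old one
    }

    static void addWithoutUniqueKey(String joinKey, String row) {
        LIST_VIEW.computeIfAbsent(joinKey, k -> new ArrayList<>()).add(row);
    }

    static void retractWithoutUniqueKey(String joinKey, String row) {
        List<String> rows = LIST_VIEW.get(joinKey);
        if (rows != null) {
            rows.remove(row); // retraction must scan the list, which is why unique keys are cheaper
        }
    }

    public static void main(String[] args) {
        addWithUniqueKey("order-1", "order-1,v1");
        addWithUniqueKey("order-1", "order-1,v2"); // in-place update
        addWithoutUniqueKey("user-1", "user-1,click");
        System.out.println(UNIQUE_KEY_VIEW + " " + LIST_VIEW);
    }
}
```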
Modifier and Type | Method and Description |
---|---|
void |
TemporalRowTimeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
TemporalProcessTimeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
TemporalRowTimeJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
TemporalProcessTimeJoinOperator.processElement2(StreamRecord<RowData> element) |
Constructor and Description |
---|
TemporalProcessTimeJoinOperator(InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
long minRetentionTime,
long maxRetentionTime,
boolean isLeftOuterJoin) |
TemporalRowTimeJoinOperator(InternalTypeInfo<RowData> leftType,
InternalTypeInfo<RowData> rightType,
GeneratedJoinCondition generatedJoinCondition,
int leftTimeAttribute,
int rightTimeAttribute,
long minRetentionTime,
long maxRetentionTime,
boolean isLeftOuterJoin) |
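TemporalRowTimeJoinOperator joins each left row against the version of the right-hand (versioned) table that was valid at the left row's event time. A standalone sketch of that lookup with a TreeMap of right-side versions keyed by row time; the names and the rate payload are illustrative:

```java
import java.util.Map;
import java.util.TreeMap;

public class TemporalJoinSketch {
    // Versions of the right-side (temporal) table, keyed by the row time at which they became valid.
    static final TreeMap<Long, String> RIGHT_VERSIONS = new TreeMap<>();

    // Returns the right-side version valid at the left row's timestamp, or null if none exists yet.
    static String lookupVersion(long leftRowTime) {
        Map.Entry<Long, String> entry = RIGHT_VERSIONS.floorEntry(leftRowTime);
        return entry == null ? null : entry.getValue();
    }

    public static void main(String[] args) {
        RIGHT_VERSIONS.put(100L, "rate=1.10");
        RIGHT_VERSIONS.put(200L, "rate=1.15");
        System.out.println(lookupVersion(150L)); // rate=1.10
        System.out.println(lookupVersion(250L)); // rate=1.15
        System.out.println(lookupVersion(50L));  // null: no version valid yet
    }
}
```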
Modifier and Type | Method and Description |
---|---|
TypeInformation<RowData> |
RowtimeProcessFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
int |
RowDataEventComparator.compare(RowData row1,
RowData row2) |
boolean |
IterativeConditionRunner.filter(RowData value,
IterativeCondition.Context<RowData> ctx) |
void |
RowtimeProcessFunction.processElement(RowData value,
ProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
boolean |
IterativeConditionRunner.filter(RowData value,
IterativeCondition.Context<RowData> ctx) |
void |
RowtimeProcessFunction.processElement(RowData value,
ProcessFunction.Context ctx,
Collector<RowData> out) |
void |
PatternProcessFunctionRunner.processMatch(Map<String,List<RowData>> match,
PatternProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
IterativeConditionRunner(GeneratedFunction<RichIterativeCondition<RowData>> generatedFunction) |
PatternProcessFunctionRunner(GeneratedFunction<PatternProcessFunction<RowData,RowData>> generatedFunction) |
RowtimeProcessFunction(int rowtimeIdx,
TypeInformation<RowData> returnType,
int precision) |
Modifier and Type | Method and Description |
---|---|
boolean |
DropUpdateBeforeFunction.filter(RowData value) |
Modifier and Type | Class and Description |
---|---|
class |
TableOperatorWrapper<OP extends StreamOperator<RowData>>
This class handles the close, endInput and other related logic of a
StreamOperator . |
Modifier and Type | Method and Description |
---|---|
<T extends StreamOperator<RowData>> |
BatchMultipleInputStreamOperatorFactory.createStreamOperator(StreamOperatorParameters<RowData> parameters) |
Modifier and Type | Method and Description |
---|---|
void |
TableOperatorWrapper.createOperator(StreamOperatorParameters<RowData> parameters) |
protected StreamConfig |
BatchMultipleInputStreamOperator.createStreamConfig(StreamOperatorParameters<RowData> multipleInputOperatorParameters,
TableOperatorWrapper<?> wrapper) |
protected StreamConfig |
MultipleInputStreamOperatorBase.createStreamConfig(StreamOperatorParameters<RowData> multipleInputOperatorParameters,
TableOperatorWrapper<?> wrapper) |
<T extends StreamOperator<RowData>> |
BatchMultipleInputStreamOperatorFactory.createStreamOperator(StreamOperatorParameters<RowData> parameters) |
Constructor and Description |
---|
BatchMultipleInputStreamOperator(StreamOperatorParameters<RowData> parameters,
List<InputSpec> inputSpecs,
List<TableOperatorWrapper<?>> headWrapper,
TableOperatorWrapper<?> tailWrapper) |
MultipleInputStreamOperatorBase(StreamOperatorParameters<RowData> parameters,
List<InputSpec> inputSpecs,
List<TableOperatorWrapper<?>> headWrappers,
TableOperatorWrapper<?> tailWrapper) |
TableOperatorWrapper(StreamOperatorFactory<RowData> factory,
String operatorName,
List<TypeInformation<?>> allInputTypes,
TypeInformation<?> outputType) |
Modifier and Type | Method and Description |
---|---|
void |
SecondInputOfTwoInput.processElement(StreamRecord<RowData> element) |
void |
OneInput.processElement(StreamRecord<RowData> element) |
void |
FirstInputOfTwoInput.processElement(StreamRecord<RowData> element) |
void |
InputBase.setKeyContextElement(StreamRecord<RowData> record) |
Constructor and Description |
---|
FirstInputOfTwoInput(TwoInputStreamOperator<RowData,RowData,RowData> operator) |
OneInput(OneInputStreamOperator<RowData,RowData> operator) |
SecondInputOfTwoInput(TwoInputStreamOperator<RowData,RowData,RowData> operator) |
Modifier and Type | Method and Description |
---|---|
void |
CopyingSecondInputOfTwoInputStreamOperatorOutput.collect(StreamRecord<RowData> record) |
void |
OneInputStreamOperatorOutput.collect(StreamRecord<RowData> record) |
void |
CopyingBroadcastingOutput.collect(StreamRecord<RowData> record) |
void |
BroadcastingOutput.collect(StreamRecord<RowData> record) |
void |
SecondInputOfTwoInputStreamOperatorOutput.collect(StreamRecord<RowData> record) |
void |
FirstInputOfTwoInputStreamOperatorOutput.collect(StreamRecord<RowData> record) |
Modifier and Type | Method and Description |
---|---|
void |
ProcTimeUnboundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AbstractRowTimeUnboundedPrecedingOver.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out)
Puts an element from the input stream into state if it is not late.
|
Modifier and Type | Method and Description |
---|---|
void |
ProcTimeUnboundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
ProcTimeRangeBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
ProcTimeRowsBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
RowTimeRowsBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
RowTimeRangeBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
AbstractRowTimeUnboundedPrecedingOver.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
ProcTimeUnboundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AbstractRowTimeUnboundedPrecedingOver.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out)
Puts an element from the input stream into state if it is not late.
|
void |
BufferDataOverWindowOperator.processElement(StreamRecord<RowData> element) |
void |
NonBufferOverWindowOperator.processElement(StreamRecord<RowData> element) |
void |
RowTimeRowsUnboundedPrecedingFunction.processElementsWithSameTimestamp(List<RowData> curRowList,
Collector<RowData> out) |
protected abstract void |
AbstractRowTimeUnboundedPrecedingOver.processElementsWithSameTimestamp(List<RowData> curRowList,
Collector<RowData> out)
Processes elements with the same timestamp; the mechanism differs between ROWS and RANGE windows.
|
void |
RowTimeRangeUnboundedPrecedingFunction.processElementsWithSameTimestamp(List<RowData> curRowList,
Collector<RowData> out) |
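The bounded-preceding functions above implement OVER windows such as ROWS BETWEEN n PRECEDING AND CURRENT ROW: each incoming row is accumulated, and the row that falls out of the frame is retracted from the accumulator. A compact standalone sketch of the rows-bounded case as a running SUM (plain Java, illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RowsBoundedPrecedingSketch {
    private final int precedingRows;            // frame size: n PRECEDING plus the current row
    private final Deque<Long> frame = new ArrayDeque<>();
    private long sum;                           // the accumulator maintained for the frame

    RowsBoundedPrecedingSketch(int precedingRows) {
        this.precedingRows = precedingRows;
    }

    // Processes one input value and returns the aggregate over the current frame.
    long processElement(long value) {
        frame.addLast(value);
        sum += value;
        if (frame.size() > precedingRows + 1) {
            sum -= frame.removeFirst();         // retract the row that fell out of the frame
        }
        return sum;
    }

    public static void main(String[] args) {
        RowsBoundedPrecedingSketch over = new RowsBoundedPrecedingSketch(2); // 2 PRECEDING
        for (long v : new long[] {1, 2, 3, 4, 5}) {
            System.out.println("value=" + v + " sum=" + over.processElement(v));
        }
        // sums: 1, 3, 6, 9, 12
    }
}
```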
Modifier and Type | Method and Description |
---|---|
RowData |
OverWindowFrame.process(int index,
RowData current)
Returns the accumulator (ACC) of the window frame.
|
RowData |
RowSlidingOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
OffsetOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
UnboundedOverWindowFrame.process(int index,
RowData current) |
RowData |
InsensitiveOverFrame.process(int index,
RowData current) |
RowData |
RangeSlidingOverFrame.process(int index,
RowData current) |
Modifier and Type | Method and Description |
---|---|
long |
OffsetOverFrame.CalcOffsetFunc.calc(RowData row) |
RowData |
OverWindowFrame.process(int index,
RowData current)
Returns the accumulator (ACC) of the window frame.
|
RowData |
RowSlidingOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
OffsetOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
UnboundedOverWindowFrame.process(int index,
RowData current) |
RowData |
InsensitiveOverFrame.process(int index,
RowData current) |
RowData |
RangeSlidingOverFrame.process(int index,
RowData current) |
Modifier and Type | Field and Description |
---|---|
protected TypeSerializer<RowData> |
PythonStreamGroupAggregateOperator.udfInputTypeSerializer
The TypeSerializer for UDF input elements.
|
protected TypeSerializer<RowData> |
PythonStreamGroupAggregateOperator.udfOutputTypeSerializer
The TypeSerializer for UDF execution results.
|
Modifier and Type | Method and Description |
---|---|
void |
PythonStreamGroupAggregateOperator.onEventTime(InternalTimer<RowData,VoidNamespace> timer)
Invoked when an event-time timer fires.
|
void |
PythonStreamGroupAggregateOperator.onProcessingTime(InternalTimer<RowData,VoidNamespace> timer)
Invoked when a processing-time timer fires.
|
void |
PythonStreamGroupAggregateOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Field and Description |
---|---|
protected ArrowSerializer<RowData> |
AbstractArrowPythonAggregateFunctionOperator.arrowSerializer |
Modifier and Type | Method and Description |
---|---|
RowData |
AbstractArrowPythonAggregateFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
RowData |
AbstractArrowPythonAggregateFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
void |
AbstractArrowPythonAggregateFunctionOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
void |
BatchArrowPythonGroupAggregateFunctionOperator.bufferInput(RowData input) |
void |
BatchArrowPythonOverWindowAggregateFunctionOperator.bufferInput(RowData input) |
void |
BatchArrowPythonGroupWindowAggregateFunctionOperator.bufferInput(RowData input) |
void |
BatchArrowPythonGroupAggregateFunctionOperator.processElementInternal(RowData value) |
void |
BatchArrowPythonOverWindowAggregateFunctionOperator.processElementInternal(RowData value) |
void |
BatchArrowPythonGroupWindowAggregateFunctionOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
void |
StreamArrowPythonProcTimeBoundedRangeOperator.bufferInput(RowData input) |
void |
StreamArrowPythonProcTimeBoundedRowsOperator.bufferInput(RowData input) |
void |
StreamArrowPythonRowTimeBoundedRowsOperator.bufferInput(RowData input) |
void |
StreamArrowPythonGroupWindowAggregateFunctionOperator.bufferInput(RowData input) |
void |
StreamArrowPythonRowTimeBoundedRangeOperator.bufferInput(RowData input) |
void |
StreamArrowPythonProcTimeBoundedRowsOperator.processElementInternal(RowData value) |
void |
StreamArrowPythonGroupWindowAggregateFunctionOperator.processElementInternal(RowData value) |
void |
AbstractStreamArrowPythonOverWindowAggregateFunctionOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
RowData |
AbstractRowDataPythonScalarFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
void |
AbstractRowDataPythonScalarFunctionOperator.bufferInput(RowData input) |
RowData |
AbstractRowDataPythonScalarFunctionOperator.getFunctionInput(RowData element) |
void |
RowDataPythonScalarFunctionOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
void |
RowDataArrowPythonScalarFunctionOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataPythonTableFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
void |
RowDataPythonTableFunctionOperator.bufferInput(RowData input) |
RowData |
RowDataPythonTableFunctionOperator.getFunctionInput(RowData element) |
void |
RowDataPythonTableFunctionOperator.processElementInternal(RowData value) |
Modifier and Type | Method and Description |
---|---|
void |
StreamRecordRowDataWrappingCollector.collect(RowData record) |
Constructor and Description |
---|
StreamRecordRowDataWrappingCollector(Collector<StreamRecord<RowData>> out) |
Modifier and Type | Field and Description |
---|---|
protected InternalTypeInfo<RowData> |
AbstractTopNFunction.inputRowType |
protected Comparator<RowData> |
AbstractTopNFunction.sortKeyComparator |
protected KeySelector<RowData,RowData> |
AbstractTopNFunction.sortKeySelector |
Modifier and Type | Method and Description |
---|---|
protected boolean |
AbstractTopNFunction.checkSortKeyInBufferRange(RowData sortKey,
org.apache.flink.table.runtime.operators.rank.TopNBuffer buffer)
Checks whether the record should be put into the buffer.
|
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow,
long rank) |
int |
ComparableRecordComparator.compare(RowData o1,
RowData o2) |
protected long |
AbstractTopNFunction.initRankEnd(RowData row)
Initialize rank end.
|
void |
UpdatableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
RetractableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AppendOnlyTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow,
long rank) |
void |
UpdatableTopNFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
RetractableTopNFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
AppendOnlyTopNFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
UpdatableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
RetractableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AppendOnlyTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
Constructor and Description |
---|
AppendOnlyTopNFunction(long minRetentionTime,
long maxRetentionTime,
InternalTypeInfo<RowData> inputRowType,
GeneratedRecordComparator sortKeyGeneratedRecordComparator,
RowDataKeySelector sortKeySelector,
RankType rankType,
RankRange rankRange,
boolean generateUpdateBefore,
boolean outputRankNumber,
long cacheSize) |
RetractableTopNFunction(long minRetentionTime,
long maxRetentionTime,
InternalTypeInfo<RowData> inputRowType,
ComparableRecordComparator comparableRecordComparator,
RowDataKeySelector sortKeySelector,
RankType rankType,
RankRange rankRange,
GeneratedRecordEqualiser generatedEqualiser,
boolean generateUpdateBefore,
boolean outputRankNumber) |
UpdatableTopNFunction(long minRetentionTime,
long maxRetentionTime,
InternalTypeInfo<RowData> inputRowType,
RowDataKeySelector rowKeySelector,
GeneratedRecordComparator generatedRecordComparator,
RowDataKeySelector sortKeySelector,
RankType rankType,
RankRange rankRange,
boolean generateUpdateBefore,
boolean outputRankNumber,
long cacheSize) |
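AbstractTopNFunction and its subclasses maintain a per-key Top-N buffer sorted by the sort key and emit insert and delete messages as records enter and leave the Top-N. A minimal append-only sketch with a TreeMap standing in for the runtime's TopNBuffer; names are illustrative and ties on the sort key are ignored for brevity:

```java
import java.util.Map;
import java.util.TreeMap;

public class AppendOnlyTopNSketch {
    private final int topN;
    // Sort key -> record; a TreeMap stands in for the TopNBuffer (ascending, N smallest kept).
    private final TreeMap<Long, String> buffer = new TreeMap<>();

    AppendOnlyTopNSketch(int topN) {
        this.topN = topN;
    }

    void processElement(long sortKey, String record) {
        if (buffer.size() >= topN && sortKey >= buffer.lastKey()) {
            return;                                         // outside the Top-N, nothing to emit
        }
        buffer.put(sortKey, record);
        System.out.println("+I " + record);
        if (buffer.size() > topN) {
            Map.Entry<Long, String> evicted = buffer.pollLastEntry();
            System.out.println("-D " + evicted.getValue()); // record pushed out of the Top-N
        }
    }

    public static void main(String[] args) {
        AppendOnlyTopNSketch top2 = new AppendOnlyTopNSketch(2);
        top2.processElement(10, "a");
        top2.processElement(5, "b");
        top2.processElement(7, "c");  // evicts "a"
        top2.processElement(20, "d"); // outside the Top-2, ignored
    }
}
```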
Modifier and Type | Method and Description |
---|---|
boolean |
SinkNotNullEnforcer.filter(RowData row) |
Modifier and Type | Method and Description |
---|---|
void |
SinkOperator.processElement(StreamRecord<RowData> element) |
Constructor and Description |
---|
SinkOperator(SinkFunction<RowData> sinkFunction,
int rowtimeFieldIndex,
SinkNotNullEnforcer enforcer) |
Modifier and Type | Method and Description |
---|---|
boolean |
BinaryInMemorySortBuffer.write(RowData record)
Writes a given record to this sort buffer.
|
void |
BinaryExternalSorter.write(RowData current) |
protected void |
BinaryIndexedSortable.writeIndexAndNormalizedKey(RowData record,
long currOffset)
Writes the index and the normalized key.
|
Modifier and Type | Method and Description |
---|---|
static BinaryInMemorySortBuffer |
BinaryInMemorySortBuffer.createBuffer(NormalizedKeyComputer normalizedKeyComputer,
AbstractRowDataSerializer<RowData> inputSerializer,
BinaryRowDataSerializer serializer,
RecordComparator comparator,
MemorySegmentPool memoryPool)
Creates an in-memory sorter in insert mode.
|
void |
RowTimeSortOperator.onEventTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
ProcTimeSortOperator.onEventTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
RowTimeSortOperator.onProcessingTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
ProcTimeSortOperator.onProcessingTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
StreamSortOperator.processElement(StreamRecord<RowData> element) |
void |
RowTimeSortOperator.processElement(StreamRecord<RowData> element) |
void |
SortLimitOperator.processElement(StreamRecord<RowData> element) |
void |
ProcTimeSortOperator.processElement(StreamRecord<RowData> element) |
void |
SortOperator.processElement(StreamRecord<RowData> element) |
void |
RankOperator.processElement(StreamRecord<RowData> element) |
void |
LimitOperator.processElement(StreamRecord<RowData> element) |
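BinaryInMemorySortBuffer and BinaryIndexedSortable sort binary rows by first comparing a fixed-length normalized key (a byte prefix compared as unsigned bytes) and falling back to a full record comparison only on ties. A standalone sketch of that idea in plain Java (illustrative, not the Flink sorter):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class NormalizedKeySketch {
    static final int NORMALIZED_KEY_LEN = 4;

    // Builds a fixed-length, byte-wise comparable prefix of the sort key.
    static byte[] normalizedKey(String sortKey) {
        byte[] prefix = new byte[NORMALIZED_KEY_LEN];
        byte[] raw = sortKey.getBytes(StandardCharsets.UTF_8);
        System.arraycopy(raw, 0, prefix, 0, Math.min(raw.length, NORMALIZED_KEY_LEN));
        return prefix;
    }

    // Unsigned lexicographic comparison; 0 means "undecided, compare the full records".
    static int compareNormalizedKeys(byte[] a, byte[] b) {
        for (int i = 0; i < NORMALIZED_KEY_LEN; i++) {
            int cmp = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (cmp != 0) {
                return cmp;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        String[] rows = {"delta", "alpha", "charlie", "bravo"};
        Arrays.sort(rows, (x, y) -> {
            int cmp = compareNormalizedKeys(normalizedKey(x), normalizedKey(y));
            return cmp != 0 ? cmp : x.compareTo(y);   // tie on the prefix: full comparison
        });
        System.out.println(Arrays.toString(rows));
    }
}
```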
Modifier and Type | Method and Description |
---|---|
RowData |
ValuesInputFormat.nextRecord(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
InternalTypeInfo<RowData> |
ValuesInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
ValuesInputFormat.nextRecord(RowData reuse) |
Constructor and Description |
---|
ValuesInputFormat(GeneratedInput<GenericInputFormat<RowData>> generatedInput,
InternalTypeInfo<RowData> returnType) |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
WindowOperator.collector
This is used for emitting elements with a given timestamp.
|
protected InternalValueState<K,W,RowData> |
WindowOperator.previousState |
Modifier and Type | Method and Description |
---|---|
void |
WindowOperator.processElement(StreamRecord<RowData> record) |
Modifier and Type | Method and Description |
---|---|
Collection<TimeWindow> |
CumulativeWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<TimeWindow> |
SessionWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<TimeWindow> |
SlidingWindowAssigner.assignWindows(RowData element,
long timestamp) |
abstract Collection<W> |
WindowAssigner.assignWindows(RowData element,
long timestamp)
Given the timestamp and element, returns the set of windows into which it should be placed.
|
Collection<CountWindow> |
CountTumblingWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<TimeWindow> |
TumblingWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<CountWindow> |
CountSlidingWindowAssigner.assignWindows(RowData element,
long timestamp) |
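WindowAssigner.assignWindows maps an element's timestamp to the windows it belongs to; for time windows the assignment is simple arithmetic. A standalone sketch of the tumbling and sliding cases, assuming non-negative timestamps and windows expressed as [start, end) in milliseconds (illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

public class WindowAssignerSketch {
    // A tumbling window of the given size contains each timestamp exactly once.
    static long[] assignTumbling(long timestamp, long size) {
        long start = timestamp - (timestamp % size);
        return new long[] {start, start + size};
    }

    // A sliding window of the given size and slide contains each timestamp size/slide times.
    static List<long[]> assignSliding(long timestamp, long size, long slide) {
        List<long[]> windows = new ArrayList<>();
        long lastStart = timestamp - (timestamp % slide);
        for (long start = lastStart; start > timestamp - size; start -= slide) {
            windows.add(new long[] {start, start + size});
        }
        return windows;
    }

    public static void main(String[] args) {
        long[] w = assignTumbling(7_500L, 5_000L);
        System.out.println("[" + w[0] + ", " + w[1] + ")");          // [5000, 10000)
        for (long[] s : assignSliding(7_500L, 10_000L, 5_000L)) {
            System.out.println("[" + s[0] + ", " + s[1] + ")");      // [5000, 15000) and [0, 10000)
        }
    }
}
```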
Modifier and Type | Method and Description |
---|---|
RowData |
InternalWindowProcessFunction.Context.getWindowAccumulators(W window)
Gets the accumulators of the given window.
|
Modifier and Type | Method and Description |
---|---|
Collection<W> |
GeneralWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp) |
abstract Collection<W> |
InternalWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp)
Assigns the input element to the actual windows on which the Trigger should fire. |
Collection<W> |
PanedWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp) |
Collection<W> |
MergingWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp) |
Collection<W> |
GeneralWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp) |
abstract Collection<W> |
InternalWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp)
Assigns the input element to the state namespaces into which it should be accumulated or retracted.
|
Collection<W> |
PanedWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp) |
Collection<W> |
MergingWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp) |
void |
InternalWindowProcessFunction.Context.setWindowAccumulators(W window,
RowData acc)
Sets the accumulators of the given window.
|
Modifier and Type | Method and Description |
---|---|
Long |
BoundedOutOfOrderWatermarkGenerator.currentWatermark(RowData row) |
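BoundedOutOfOrderWatermarkGenerator derives the watermark from the maximum row time seen so far minus a fixed out-of-orderness bound. A standalone sketch (plain Java, illustrative names):

```java
public class BoundedOutOfOrdernessSketch {
    private final long maxOutOfOrdernessMillis;
    private long maxSeenTimestamp = Long.MIN_VALUE;

    BoundedOutOfOrdernessSketch(long maxOutOfOrdernessMillis) {
        this.maxOutOfOrdernessMillis = maxOutOfOrdernessMillis;
    }

    // Called for each row; returns the watermark that could be emitted afterwards.
    long currentWatermark(long rowTimestamp) {
        maxSeenTimestamp = Math.max(maxSeenTimestamp, rowTimestamp);
        return maxSeenTimestamp - maxOutOfOrdernessMillis;
    }

    public static void main(String[] args) {
        BoundedOutOfOrdernessSketch gen = new BoundedOutOfOrdernessSketch(2_000L);
        System.out.println(gen.currentWatermark(10_000L)); // 8000
        System.out.println(gen.currentWatermark(9_000L));  // still 8000: a late row never moves it back
        System.out.println(gen.currentWatermark(13_000L)); // 11000
    }
}
```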
Modifier and Type | Method and Description |
---|---|
void |
RowTimeMiniBatchAssginerOperator.processElement(StreamRecord<RowData> element) |
void |
ProcTimeMiniBatchAssignerOperator.processElement(StreamRecord<RowData> element) |
void |
WatermarkAssignerOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
StreamPartitioner<RowData> |
BinaryHashPartitioner.copy() |
Modifier and Type | Method and Description |
---|---|
int |
BinaryHashPartitioner.selectChannel(SerializationDelegate<StreamRecord<RowData>> record) |
Modifier and Type | Class and Description |
---|---|
class |
AbstractRowDataSerializer<T extends RowData>
Row serializer that provides paged serialization and deserialization methods.
|
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.copy(RowData from) |
RowData |
RowDataSerializer.copy(RowData from,
RowData reuse) |
RowData |
RowDataSerializer.createInstance() |
RowData |
RowDataSerializer.deserialize(DataInputView source) |
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
RowData |
RowDataSerializer.deserializeFromPages(AbstractPagedInputView source) |
RowData |
RowDataSerializer.deserializeFromPages(RowData reuse,
AbstractPagedInputView source) |
RowData |
RowDataSerializer.mapFromPages(AbstractPagedInputView source) |
RowData |
RowDataSerializer.mapFromPages(RowData reuse,
AbstractPagedInputView source) |
Modifier and Type | Method and Description |
---|---|
TypeSerializer<RowData> |
RowDataSerializer.duplicate() |
static InternalTypeInfo<RowData> |
InternalTypeInfo.of(RowType type)
Creates type information for a
RowType represented by internal data structures. |
static InternalTypeInfo<RowData> |
InternalTypeInfo.ofFields(LogicalType... fieldTypes)
Creates type information for
RowType represented by internal data structures. |
static InternalTypeInfo<RowData> |
InternalTypeInfo.ofFields(LogicalType[] fieldTypes,
String[] fieldNames)
Creates type information for
RowType represented by internal data structures. |
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
TypeSerializerSnapshot<RowData> |
RowDataSerializer.snapshotConfiguration() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.copy(RowData from) |
RowData |
RowDataSerializer.copy(RowData from,
RowData reuse) |
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
RowData |
RowDataSerializer.deserializeFromPages(RowData reuse,
AbstractPagedInputView source) |
RowData |
RowDataSerializer.mapFromPages(RowData reuse,
AbstractPagedInputView source) |
void |
RowDataSerializer.serialize(RowData row,
DataOutputView target) |
int |
RowDataSerializer.serializeToPages(RowData row,
AbstractPagedOutputView target) |
BinaryRowData |
RowDataSerializer.toBinaryRow(RowData row)
Converts RowData into BinaryRowData. |
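InternalTypeInfo.of(RowType) and the serializer it creates are the usual entry points for serializing internal rows. The following round-trip sketch uses only the public classes listed in these tables plus Flink's memory views; it assumes the flink-table-runtime and flink-core artifacts of this release are on the classpath:

```java
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.core.memory.DataInputDeserializer;
import org.apache.flink.core.memory.DataOutputSerializer;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class RowDataSerdeSketch {
    public static void main(String[] args) throws Exception {
        // Type information for ROW<id INT, name STRING> backed by internal data structures.
        RowType rowType = RowType.of(new IntType(), new VarCharType(VarCharType.MAX_LENGTH));
        TypeSerializer<RowData> serializer =
                InternalTypeInfo.of(rowType).createSerializer(new ExecutionConfig());

        // Internal rows use internal data structures, e.g. StringData instead of String.
        RowData row = GenericRowData.of(42, StringData.fromString("flink"));

        DataOutputSerializer out = new DataOutputSerializer(64);
        serializer.serialize(row, out);

        DataInputDeserializer in = new DataInputDeserializer(out.getCopyOfBuffer());
        RowData copy = serializer.deserialize(in);
        System.out.println(copy.getInt(0) + ", " + copy.getString(1));
    }
}
```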
Modifier and Type | Method and Description |
---|---|
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.deserialize(DataInputView source) |
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
Modifier and Type | Method and Description |
---|---|
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
TypeSerializerSnapshot<RowData> |
RowDataSerializer.snapshotConfiguration() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
void |
RowDataSerializer.serialize(RowData row,
DataOutputView target) |
Modifier and Type | Method and Description |
---|---|
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
Modifier and Type | Interface and Description |
---|---|
interface |
RowIterator<T extends RowData>
An internal iterator interface which presents a more restrictive API than
Iterator . |
Modifier and Type | Method and Description |
---|---|
void |
ResettableExternalBuffer.add(RowData row) |
void |
ResettableRowBuffer.add(RowData row)
Appends the specified row to the end of this buffer.
|
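RowIterator deliberately exposes a more restrictive advance-then-read protocol than java.util.Iterator so that implementations such as ResettableExternalBuffer can reuse the returned row object. A plain-Java sketch of that protocol over an in-memory list (illustrative, not the Flink interface itself):

```java
import java.util.Arrays;
import java.util.List;

public class RowIteratorSketch {
    // Mimics the advanceNext()/getRow() style: advance first, then read the current row.
    static class ListRowIterator {
        private final List<String> rows;
        private int position = -1;

        ListRowIterator(List<String> rows) {
            this.rows = rows;
        }

        boolean advanceNext() {
            position++;
            return position < rows.size();
        }

        String getRow() {
            return rows.get(position); // only valid after advanceNext() returned true
        }
    }

    public static void main(String[] args) {
        ListRowIterator it = new ListRowIterator(Arrays.asList("r1", "r2", "r3"));
        while (it.advanceNext()) {
            System.out.println(it.getRow());
        }
    }
}
```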
Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.