Modifier and Type | Method and Description
---|---
org.apache.hadoop.hbase.client.Mutation | RowDataToMutationConverter.convertToMutation(RowData record)

Modifier and Type | Method and Description
---|---
protected RowData | HBaseRowDataInputFormat.mapResultToOutType(org.apache.hadoop.hbase.client.Result res)

Modifier and Type | Method and Description
---|---
RowData | HBaseSerde.convertToRow(org.apache.hadoop.hbase.client.Result result)<br>Converts an HBase Result into a RowData.

Modifier and Type | Method and Description
---|---
org.apache.hadoop.hbase.client.Delete | HBaseSerde.createDeleteMutation(RowData row)<br>Returns an instance of Delete that removes a record from the HBase table.
org.apache.hadoop.hbase.client.Put | HBaseSerde.createPutMutation(RowData row)<br>Returns an instance of Put that writes a record to the HBase table.
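Read together, the two HBaseSerde factory methods above split a changelog stream by row kind. A minimal, hypothetical sketch of how a sink might dispatch on `RowData.getRowKind()`; the `serde` parameter and the surrounding class are assumptions, not part of this index:

```java
import org.apache.flink.table.data.RowData;
import org.apache.hadoop.hbase.client.Mutation;

// Sketch only: maps a changelog RowData to the matching HBase mutation.
Mutation toMutation(HBaseSerde serde, RowData row) {
    switch (row.getRowKind()) {
        case INSERT:
        case UPDATE_AFTER:
            return serde.createPutMutation(row);    // upsert path
        case UPDATE_BEFORE:
        case DELETE:
        default:
            return serde.createDeleteMutation(row); // retract path
    }
}
```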
Modifier and Type | Method and Description
---|---
RowData | AbstractJdbcRowConverter.toInternal(ResultSet resultSet)
RowData | JdbcRowConverter.toInternal(ResultSet resultSet)

Modifier and Type | Method and Description
---|---
PreparedStatement | AbstractJdbcRowConverter.toExternal(RowData rowData, PreparedStatement statement)
PreparedStatement | JdbcRowConverter.toExternal(RowData rowData, PreparedStatement statement)<br>Converts data from Flink's internal RowData into a JDBC object.
Modifier and Type | Method and Description
---|---
void | BufferReduceStatementExecutor.addToBatch(RowData record)

Modifier and Type | Method and Description
---|---
RowData | JdbcRowDataInputFormat.nextRecord(RowData reuse)<br>Stores the next resultSet row in a tuple.

Modifier and Type | Method and Description
---|---
JdbcBatchingOutputFormat<RowData,?,?> | JdbcDynamicOutputFormatBuilder.build()
TypeInformation<RowData> | JdbcRowDataInputFormat.getProducedType()

Modifier and Type | Method and Description
---|---
RowData | JdbcRowDataInputFormat.nextRecord(RowData reuse)<br>Stores the next resultSet row in a tuple.

Modifier and Type | Method and Description
---|---
JdbcDynamicOutputFormatBuilder | JdbcDynamicOutputFormatBuilder.setRowDataTypeInfo(TypeInformation<RowData> rowDataTypeInfo)
JdbcRowDataInputFormat.Builder | JdbcRowDataInputFormat.Builder.setRowDataTypeInfo(TypeInformation<RowData> rowDataTypeInfo)
Modifier and Type | Method and Description
---|---
TableSource<RowData> | HiveTableSource.applyLimit(long limit)
TableSource<RowData> | HiveTableSource.applyPartitionPruning(List<Map<String,String>> remainingPartitions)
TableSource<RowData> | HiveTableFactory.createTableSource(TableSourceFactory.Context context)
AsyncTableFunction<RowData> | HiveTableSource.getAsyncLookupFunction(String[] lookupKeys)
DataStream<RowData> | HiveTableSource.getDataStream(StreamExecutionEnvironment execEnv)
TableFunction<RowData> | HiveTableSource.getLookupFunction(String[] lookupKeys)
TableSource<RowData> | HiveTableSource.projectFields(int[] fields)

Modifier and Type | Method and Description
---|---
LinkedHashMap<String,String> | HiveRowDataPartitionComputer.generatePartValues(RowData in)

Modifier and Type | Method and Description
---|---
RowData | HiveMapredSplitReader.nextRecord(RowData reuse)
RowData | SplitReader.nextRecord(RowData reuse)<br>Reads the next record from the input.
RowData | HiveVectorizedOrcSplitReader.nextRecord(RowData reuse)
RowData | HiveTableInputFormat.nextRecord(RowData reuse)
RowData | HiveTableFileInputFormat.nextRecord(RowData reuse)
RowData | HiveVectorizedParquetSplitReader.nextRecord(RowData reuse)

Modifier and Type | Method and Description
---|---
RowData | HiveMapredSplitReader.nextRecord(RowData reuse)
RowData | SplitReader.nextRecord(RowData reuse)<br>Reads the next record from the input.
RowData | HiveVectorizedOrcSplitReader.nextRecord(RowData reuse)
RowData | HiveTableInputFormat.nextRecord(RowData reuse)
RowData | HiveTableFileInputFormat.nextRecord(RowData reuse)
RowData | HiveVectorizedParquetSplitReader.nextRecord(RowData reuse)
default void | SplitReader.seekToRow(long rowCount, RowData reuse)<br>Seeks to a particular row number.
void | HiveVectorizedOrcSplitReader.seekToRow(long rowCount, RowData reuse)
void | HiveVectorizedParquetSplitReader.seekToRow(long rowCount, RowData reuse)

Modifier and Type | Method and Description
---|---
HadoopPathBasedBulkWriter<RowData> | HiveBulkWriterFactory.create(org.apache.hadoop.fs.Path targetPath, org.apache.hadoop.fs.Path inProgressPath)
java.util.function.Function<RowData,org.apache.hadoop.io.Writable> | HiveWriterFactory.createRowDataConverter()
Modifier and Type | Method and Description
---|---
RowData | AvroRowDataDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
Optional<BulkWriter.Factory<RowData>> | AvroFileSystemFormatFactory.createBulkWriterFactory(FileSystemFormatFactory.WriterContext context)
DecodingFormat<DeserializationSchema<RowData>> | AvroFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
Optional<Encoder<RowData>> | AvroFileSystemFormatFactory.createEncoder(FileSystemFormatFactory.WriterContext context)
EncodingFormat<SerializationSchema<RowData>> | AvroFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
InputFormat<RowData,?> | AvroFileSystemFormatFactory.createReader(FileSystemFormatFactory.ReaderContext context)
TypeInformation<RowData> | AvroRowDataDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | AvroRowDataDeserializationSchema.isEndOfStream(RowData nextElement)
byte[] | AvroRowDataSerializationSchema.serialize(RowData row)

Constructor and Description
---
AvroRowDataDeserializationSchema(RowType rowType, TypeInformation<RowData> typeInfo)<br>Creates an Avro deserialization schema for the given logical type.
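A hedged sketch of using the constructor above; `rowType`, `typeInfo`, and `message` are assumed inputs (from the table's resolved schema and the wire), not anything defined in this index:

```java
import org.apache.flink.formats.avro.AvroRowDataDeserializationSchema;
import org.apache.flink.table.data.RowData;

// Build the schema from an assumed RowType and TypeInformation<RowData>.
AvroRowDataDeserializationSchema schema =
        new AvroRowDataDeserializationSchema(rowType, typeInfo);

// `message` is an Avro-encoded byte[] record.
RowData row = schema.deserialize(message);
```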
Modifier and Type | Method and Description
---|---
RowData | CsvRowDataDeserializationSchema.deserialize(byte[] message)
RowData | CsvFileSystemFormatFactory.CsvInputFormat.nextRecord(RowData reuse)

Modifier and Type | Method and Description
---|---
Optional<BulkWriter.Factory<RowData>> | CsvFileSystemFormatFactory.createBulkWriterFactory(FileSystemFormatFactory.WriterContext context)
DecodingFormat<DeserializationSchema<RowData>> | CsvFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
Optional<Encoder<RowData>> | CsvFileSystemFormatFactory.createEncoder(FileSystemFormatFactory.WriterContext context)
EncodingFormat<SerializationSchema<RowData>> | CsvFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
InputFormat<RowData,?> | CsvFileSystemFormatFactory.createReader(FileSystemFormatFactory.ReaderContext context)
TypeInformation<RowData> | CsvRowDataDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | CsvRowDataDeserializationSchema.isEndOfStream(RowData nextElement)
RowData | CsvFileSystemFormatFactory.CsvInputFormat.nextRecord(RowData reuse)
byte[] | CsvRowDataSerializationSchema.serialize(RowData row)

Constructor and Description
---
Builder(RowType rowType, TypeInformation<RowData> resultTypeInfo)<br>Creates a CSV deserialization schema for the given TypeInformation with optional parameters.

Modifier and Type | Method and Description
---|---
RowData | JsonRowDataDeserializationSchema.deserialize(byte[] message)
RowData | JsonFileSystemFormatFactory.JsonInputFormat.nextRecord(RowData record)
RowData | JsonFileSystemFormatFactory.JsonInputFormat.readRecord(RowData reuse, byte[] bytes, int offset, int numBytes)

Modifier and Type | Method and Description
---|---
Optional<BulkWriter.Factory<RowData>> | JsonFileSystemFormatFactory.createBulkWriterFactory(FileSystemFormatFactory.WriterContext context)
DecodingFormat<DeserializationSchema<RowData>> | JsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
Optional<Encoder<RowData>> | JsonFileSystemFormatFactory.createEncoder(FileSystemFormatFactory.WriterContext context)
EncodingFormat<SerializationSchema<RowData>> | JsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
InputFormat<RowData,?> | JsonFileSystemFormatFactory.createReader(FileSystemFormatFactory.ReaderContext context)
TypeInformation<RowData> | JsonRowDataDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
void | JsonFileSystemFormatFactory.JsonRowDataEncoder.encode(RowData element, OutputStream stream)
boolean | JsonRowDataDeserializationSchema.isEndOfStream(RowData nextElement)
RowData | JsonFileSystemFormatFactory.JsonInputFormat.nextRecord(RowData record)
RowData | JsonFileSystemFormatFactory.JsonInputFormat.readRecord(RowData reuse, byte[] bytes, int offset, int numBytes)
byte[] | JsonRowDataSerializationSchema.serialize(RowData row)

Constructor and Description
---
JsonRowDataDeserializationSchema(RowType rowType, TypeInformation<RowData> resultTypeInfo, boolean failOnMissingField, boolean ignoreParseErrors, TimestampFormat timestampFormat)
Modifier and Type | Method and Description
---|---
RowData | CanalJsonDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | CanalJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | CanalJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
TypeInformation<RowData> | CanalJsonDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | CanalJsonDeserializationSchema.isEndOfStream(RowData nextElement)

Modifier and Type | Method and Description
---|---
void | CanalJsonDeserializationSchema.deserialize(byte[] message, Collector<RowData> out)

Constructor and Description
---
CanalJsonDeserializationSchema(RowType rowType, TypeInformation<RowData> resultTypeInfo, boolean ignoreParseErrors, TimestampFormat timestampFormatOption)

Modifier and Type | Method and Description
---|---
RowData | DebeziumJsonDeserializationSchema.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | DebeziumJsonFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
EncodingFormat<SerializationSchema<RowData>> | DebeziumJsonFormatFactory.createEncodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
TypeInformation<RowData> | DebeziumJsonDeserializationSchema.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | DebeziumJsonDeserializationSchema.isEndOfStream(RowData nextElement)

Modifier and Type | Method and Description
---|---
void | DebeziumJsonDeserializationSchema.deserialize(byte[] message, Collector<RowData> out)

Constructor and Description
---
DebeziumJsonDeserializationSchema(RowType rowType, TypeInformation<RowData> resultTypeInfo, boolean schemaInclude, boolean ignoreParseErrors, TimestampFormat timestampFormatOption)
Modifier and Type | Method and Description
---|---
RowData | ParquetFileSystemFormatFactory.ParquetInputFormat.nextRecord(RowData reuse)

Modifier and Type | Method and Description
---|---
Optional<BulkWriter.Factory<RowData>> | ParquetFileSystemFormatFactory.createBulkWriterFactory(FileSystemFormatFactory.WriterContext context)
Optional<Encoder<RowData>> | ParquetFileSystemFormatFactory.createEncoder(FileSystemFormatFactory.WriterContext context)
InputFormat<RowData,?> | ParquetFileSystemFormatFactory.createReader(FileSystemFormatFactory.ReaderContext context)

Modifier and Type | Method and Description
---|---
RowData | ParquetFileSystemFormatFactory.ParquetInputFormat.nextRecord(RowData reuse)

Modifier and Type | Method and Description
---|---
org.apache.parquet.hadoop.ParquetWriter<RowData> | ParquetRowDataBuilder.FlinkParquetBuilder.createWriter(org.apache.parquet.io.OutputFile out)
static ParquetWriterFactory<RowData> | ParquetRowDataBuilder.createWriterFactory(RowType rowType, Configuration conf, boolean utcTimestamp)<br>Creates a Parquet BulkWriter.Factory.
protected org.apache.parquet.hadoop.api.WriteSupport<RowData> | ParquetRowDataBuilder.getWriteSupport(Configuration conf)

Modifier and Type | Method and Description
---|---
void | ParquetRowDataWriter.write(RowData record)<br>Writes a record to Parquet.
Modifier and Type | Method and Description
---|---
RowData | OrcFileSystemFormatFactory.OrcRowDataInputFormat.nextRecord(RowData reuse)
RowData | OrcColumnarRowSplitReader.nextRecord(RowData reuse)

Modifier and Type | Method and Description
---|---
Optional<BulkWriter.Factory<RowData>> | OrcFileSystemFormatFactory.createBulkWriterFactory(FileSystemFormatFactory.WriterContext context)
Optional<Encoder<RowData>> | OrcFileSystemFormatFactory.createEncoder(FileSystemFormatFactory.WriterContext context)
InputFormat<RowData,?> | OrcFileSystemFormatFactory.createReader(FileSystemFormatFactory.ReaderContext context)

Modifier and Type | Method and Description
---|---
RowData | OrcFileSystemFormatFactory.OrcRowDataInputFormat.nextRecord(RowData reuse)
RowData | OrcColumnarRowSplitReader.nextRecord(RowData reuse)

Modifier and Type | Method and Description
---|---
BulkWriter<RowData> | OrcNoHiveBulkWriterFactory.create(FSDataOutputStream out)

Modifier and Type | Method and Description
---|---
void | RowDataVectorizer.vectorize(RowData row, org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch batch)
Modifier and Type | Field and Description
---|---
protected DecodingFormat<DeserializationSchema<RowData>> | KafkaDynamicSourceBase.decodingFormat<br>Scan format for decoding records from Kafka.
protected EncodingFormat<SerializationSchema<RowData>> | KafkaDynamicSinkBase.encodingFormat<br>Sink format for encoding records to Kafka.
protected Optional<FlinkKafkaPartitioner<RowData>> | KafkaDynamicSinkBase.partitioner<br>Partitioner to select the Kafka partition for each item.

Modifier and Type | Method and Description
---|---
protected FlinkKafkaConsumerBase<RowData> | KafkaDynamicSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)
protected FlinkKafkaConsumerBase<RowData> | Kafka011DynamicSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)
protected FlinkKafkaConsumerBase<RowData> | Kafka010DynamicSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)
protected abstract FlinkKafkaConsumerBase<RowData> | KafkaDynamicSourceBase.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)<br>Creates a version-specific Kafka consumer.
protected SinkFunction<RowData> | KafkaDynamicSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
protected SinkFunction<RowData> | Kafka011DynamicSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
protected FlinkKafkaProducerBase<RowData> | Kafka010DynamicSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
protected abstract SinkFunction<RowData> | KafkaDynamicSinkBase.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)<br>Returns the version-specific Kafka producer.
static Optional<FlinkKafkaPartitioner<RowData>> | KafkaOptions.getFlinkKafkaPartitioner(ReadableConfig tableOptions, ClassLoader classLoader)<br>The partitioner can be either "fixed", "round-robin", or the full class name of a custom partitioner.
protected FlinkKafkaConsumerBase<RowData> | KafkaDynamicSourceBase.getKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)<br>Returns a version-specific Kafka consumer with the start position configured.
Modifier and Type | Method and Description
---|---
protected FlinkKafkaConsumerBase<RowData> | KafkaDynamicSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)
protected FlinkKafkaConsumerBase<RowData> | Kafka011DynamicSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)
protected FlinkKafkaConsumerBase<RowData> | Kafka010DynamicSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)
protected abstract FlinkKafkaConsumerBase<RowData> | KafkaDynamicSourceBase.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)<br>Creates a version-specific Kafka consumer.
protected SinkFunction<RowData> | KafkaDynamicSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
protected SinkFunction<RowData> | KafkaDynamicSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
protected SinkFunction<RowData> | Kafka011DynamicSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
protected SinkFunction<RowData> | Kafka011DynamicSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
protected FlinkKafkaProducerBase<RowData> | Kafka010DynamicSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
protected FlinkKafkaProducerBase<RowData> | Kafka010DynamicSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)
protected abstract SinkFunction<RowData> | KafkaDynamicSinkBase.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)<br>Returns the version-specific Kafka producer.
protected abstract SinkFunction<RowData> | KafkaDynamicSinkBase.createKafkaProducer(String topic, Properties properties, SerializationSchema<RowData> serializationSchema, Optional<FlinkKafkaPartitioner<RowData>> partitioner)<br>Returns the version-specific Kafka producer.
protected KafkaDynamicSinkBase | KafkaDynamicTableFactory.createKafkaTableSink(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat)
protected KafkaDynamicSinkBase | KafkaDynamicTableFactory.createKafkaTableSink(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat)
protected KafkaDynamicSinkBase | Kafka011DynamicTableFactory.createKafkaTableSink(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat)
protected KafkaDynamicSinkBase | Kafka011DynamicTableFactory.createKafkaTableSink(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat)
protected KafkaDynamicSinkBase | Kafka010DynamicTableFactory.createKafkaTableSink(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat)
protected KafkaDynamicSinkBase | Kafka010DynamicTableFactory.createKafkaTableSink(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat)
protected abstract KafkaDynamicSinkBase | KafkaDynamicTableFactoryBase.createKafkaTableSink(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat)<br>Constructs the version-specific Kafka table sink.
protected abstract KafkaDynamicSinkBase | KafkaDynamicTableFactoryBase.createKafkaTableSink(DataType consumedDataType, String topic, Properties properties, Optional<FlinkKafkaPartitioner<RowData>> partitioner, EncodingFormat<SerializationSchema<RowData>> encodingFormat)<br>Constructs the version-specific Kafka table sink.
protected KafkaDynamicSourceBase | KafkaDynamicTableFactory.createKafkaTableSource(DataType producedDataType, String topic, Properties properties, DecodingFormat<DeserializationSchema<RowData>> decodingFormat, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis)
protected KafkaDynamicSourceBase | Kafka011DynamicTableFactory.createKafkaTableSource(DataType producedDataType, String topic, Properties properties, DecodingFormat<DeserializationSchema<RowData>> decodingFormat, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis)
protected KafkaDynamicSourceBase | Kafka010DynamicTableFactory.createKafkaTableSource(DataType producedDataType, String topic, Properties properties, DecodingFormat<DeserializationSchema<RowData>> decodingFormat, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis)
protected abstract KafkaDynamicSourceBase | KafkaDynamicTableFactoryBase.createKafkaTableSource(DataType producedDataType, String topic, Properties properties, DecodingFormat<DeserializationSchema<RowData>> decodingFormat, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis)<br>Constructs the version-specific Kafka table source.
protected FlinkKafkaConsumerBase<RowData> | KafkaDynamicSourceBase.getKafkaConsumer(String topic, Properties properties, DeserializationSchema<RowData> deserializationSchema)<br>Returns a version-specific Kafka consumer with the start position configured.
Modifier and Type | Method and Description
---|---
BulkWriter.Factory<RowData> | HiveShimV200.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes)
BulkWriter.Factory<RowData> | HiveShimV100.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes)
BulkWriter.Factory<RowData> | HiveShim.createOrcBulkWriterFactory(Configuration conf, String schema, LogicalType[] fieldTypes)<br>Creates an ORC BulkWriter.Factory for different Hive versions.

Modifier and Type | Method and Description
---|---
OutputFormat<RowData> | OutputFormatProvider.createOutputFormat()<br>Creates an OutputFormat instance.
SinkFunction<RowData> | SinkFunctionProvider.createSinkFunction()<br>Creates a SinkFunction instance.

Modifier and Type | Method and Description
---|---
static OutputFormatProvider | OutputFormatProvider.of(OutputFormat<RowData> outputFormat)<br>Helper method for creating a static provider.
static SinkFunctionProvider | SinkFunctionProvider.of(SinkFunction<RowData> sinkFunction)<br>Helper method for creating a static provider.

Modifier and Type | Method and Description
---|---
InputFormat<RowData,?> | InputFormatProvider.createInputFormat()<br>Creates an InputFormat instance.
SourceFunction<RowData> | SourceFunctionProvider.createSourceFunction()<br>Creates a SourceFunction instance.

Modifier and Type | Method and Description
---|---
static InputFormatProvider | InputFormatProvider.of(InputFormat<RowData,?> inputFormat)<br>Helper method for creating a static provider.
static SourceFunctionProvider | SourceFunctionProvider.of(SourceFunction<RowData> sourceFunction, boolean isBounded)<br>Helper method for creating a static provider.
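The provider interfaces above are how a connector hands its runtime implementation to the planner. A hedged sketch of the source side; the `buildSourceFunction()` helper is hypothetical:

```java
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.table.connector.source.SourceFunctionProvider;
import org.apache.flink.table.data.RowData;

// Inside a DynamicTableSource's scan runtime provider:
SourceFunction<RowData> sourceFn = buildSourceFunction(); // hypothetical helper
// `false` marks the stream as unbounded.
return SourceFunctionProvider.of(sourceFn, false);
```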
Modifier and Type | Method and Description
---|---
RowData | SupportsComputedColumnPushDown.ComputedColumnConverter.convert(RowData producedRow)<br>Generates and adds computed columns to RowData if necessary.

Modifier and Type | Method and Description
---|---
AssignerWithPeriodicWatermarks<RowData> | PeriodicWatermarkAssignerProvider.getPeriodicWatermarkAssigner()
AssignerWithPunctuatedWatermarks<RowData> | PunctuatedWatermarkAssignerProvider.getPunctuatedWatermarkAssigner()

Modifier and Type | Method and Description
---|---
RowData | SupportsComputedColumnPushDown.ComputedColumnConverter.convert(RowData producedRow)<br>Generates and adds computed columns to RowData if necessary.

Modifier and Type | Class and Description
---|---
class | BoxedWrapperRowData<br>An implementation of RowData that is also backed by an array of Java Object, similar to GenericRowData.
class | ColumnarRowData<br>Columnar row to support access to vector column data.
class | GenericRowData<br>An internal data structure representing data of RowType and other (possibly nested) structured types such as StructuredType.
class | JoinedRowData
class | UpdatableRowData
Modifier and Type | Method and Description
---|---
RowData | UpdatableRowData.getRow()
RowData | ColumnarRowData.getRow(int pos, int numFields)
RowData | ColumnarArrayData.getRow(int pos, int numFields)
RowData | BoxedWrapperRowData.getRow(int pos, int numFields)
RowData | UpdatableRowData.getRow(int pos, int numFields)
RowData | JoinedRowData.getRow(int pos, int numFields)
RowData | GenericRowData.getRow(int pos, int numFields)
RowData | GenericArrayData.getRow(int pos, int numFields)
RowData | RowData.getRow(int pos, int numFields)<br>Returns the row value at the given position.
RowData | ArrayData.getRow(int pos, int numFields)<br>Returns the row value at the given position.

Modifier and Type | Method and Description
---|---
static Object | RowData.get(RowData row, int pos, LogicalType fieldType)<br>Deprecated. Use createFieldGetter(LogicalType, int) to avoid logical types during runtime.
Object | RowData.FieldGetter.getFieldOrNull(RowData row)
JoinedRowData | JoinedRowData.replace(RowData row1, RowData row2)

Constructor and Description
---
JoinedRowData(RowData row1, RowData row2)
UpdatableRowData(RowData row, int arity)
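A small sketch tying these pieces together: building a GenericRowData, reading a field back through a RowData.FieldGetter, and composing two rows with the JoinedRowData constructor listed above. Package locations are assumed from Flink's `org.apache.flink.table.data` and may differ between versions:

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.JoinedRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.types.logical.BigIntType;

// A two-field row: (STRING, BIGINT).
RowData left = GenericRowData.of(StringData.fromString("alice"), 42L);

// A FieldGetter binds the logical type once, instead of passing it per access.
RowData.FieldGetter getter = RowData.createFieldGetter(new BigIntType(), 1);
Object value = getter.getFieldOrNull(left); // the BIGINT field

// JoinedRowData presents two rows as one wider row; replace() allows reuse.
RowData right = GenericRowData.of(7L);
JoinedRowData joined = new JoinedRowData(left, right);
joined.replace(left, right);
```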
Modifier and Type | Class and Description
---|---
class | BinaryRowData<br>An implementation of RowData backed by MemorySegment instead of Object.
class | NestedRowData<br>Its memory storage structure is exactly the same as BinaryRowData.

Modifier and Type | Method and Description
---|---
RowData | BinaryRowData.getRow(int pos, int numFields)
RowData | BinaryArrayData.getRow(int pos, int numFields)
RowData | NestedRowData.getRow(int pos, int numFields)
static RowData | BinarySegmentUtils.readRowData(MemorySegment[] segments, int numFields, int baseOffset, long offsetAndSize)<br>Gets an instance of RowData from the underlying MemorySegment.

Modifier and Type | Method and Description
---|---
NestedRowData | NestedRowData.copy(RowData reuse)

Modifier and Type | Method and Description
---|---
RowData | RowRowConverter.toInternal(Row external)
RowData | StructuredObjectConverter.toInternal(T external)

Modifier and Type | Method and Description
---|---
T | StructuredObjectConverter.toExternal(RowData internal)
Row | RowRowConverter.toExternal(RowData internal)
Modifier and Type | Method and Description
---|---
static boolean | RowDataUtil.isAccumulateMsg(RowData row)<br>Returns true if the message is either RowKind.INSERT or RowKind.UPDATE_AFTER, which refers to an accumulate operation of aggregation.
static boolean | RowDataUtil.isRetractMsg(RowData row)<br>Returns true if the message is either RowKind.DELETE or RowKind.UPDATE_BEFORE, which refers to a retract operation of aggregation.
External | DataFormatConverters.DataFormatConverter.toExternal(RowData row, int column)<br>Given an internal-format row, converts the value at column `column` to its external (Java) equivalent.
static GenericRowData | RowDataUtil.toGenericRow(RowData row, LogicalType[] types)
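The two RowDataUtil predicates above partition the four RowKind values into accumulate and retract messages. A brief sketch; the package location of RowDataUtil is assumed from the blink runtime:

```java
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.util.RowDataUtil;
import org.apache.flink.types.RowKind;

GenericRowData row = GenericRowData.of(1L);

// INSERT and UPDATE_AFTER are accumulate messages.
row.setRowKind(RowKind.UPDATE_AFTER);
boolean accumulate = RowDataUtil.isAccumulateMsg(row);

// DELETE and UPDATE_BEFORE are retract messages.
row.setRowKind(RowKind.UPDATE_BEFORE);
boolean retract = RowDataUtil.isRetractMsg(row);
```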
Modifier and Type | Method and Description
---|---
RowData | VectorizedColumnBatch.getRow(int rowId, int colId)

Modifier and Type | Method and Description
---|---
void | BinaryWriter.writeRow(int pos, RowData value, RowDataSerializer serializer)
Modifier and Type | Method and Description
---|---
RowData | ChangelogCsvDeserializer.deserialize(byte[] message)

Modifier and Type | Method and Description
---|---
DecodingFormat<DeserializationSchema<RowData>> | ChangelogCsvFormatFactory.createDecodingFormat(DynamicTableFactory.Context context, ReadableConfig formatOptions)
DeserializationSchema<RowData> | ChangelogCsvFormat.createRuntimeDecoder(DynamicTableSource.Context context, DataType producedDataType)
TypeInformation<RowData> | ChangelogCsvDeserializer.getProducedType()
TypeInformation<RowData> | SocketSourceFunction.getProducedType()

Modifier and Type | Method and Description
---|---
boolean | ChangelogCsvDeserializer.isEndOfStream(RowData nextElement)

Modifier and Type | Method and Description
---|---
void | SocketSourceFunction.run(SourceFunction.SourceContext<RowData> ctx)

Constructor and Description
---
ChangelogCsvDeserializer(List<LogicalType> parsingTypes, DynamicTableSource.DataStructureConverter converter, TypeInformation<RowData> producedTypeInfo, String columnDelimiter)
SocketDynamicTableSource(String hostname, int port, byte byteDelimiter, DecodingFormat<DeserializationSchema<RowData>> decodingFormat, DataType producedDataType)
SocketSourceFunction(String hostname, int port, byte byteDelimiter, DeserializationSchema<RowData> deserializer)
Modifier and Type | Method and Description |
---|---|
Optional<BulkWriter.Factory<RowData>> |
FileSystemFormatFactory.createBulkWriterFactory(FileSystemFormatFactory.WriterContext context)
Create
BulkWriter.Factory writer. |
Optional<Encoder<RowData>> |
FileSystemFormatFactory.createEncoder(FileSystemFormatFactory.WriterContext context)
Create
Encoder writer. |
InputFormat<RowData,?> |
FileSystemFormatFactory.createReader(FileSystemFormatFactory.ReaderContext context)
Create
InputFormat reader. |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataPartitionComputer.projectColumnsToWrite(RowData in) |
Modifier and Type | Method and Description |
---|---|
LinkedHashMap<String,String> |
RowDataPartitionComputer.generatePartValues(RowData in) |
String |
FileSystemTableSink.TableBucketAssigner.getBucketId(RowData element,
BucketAssigner.Context context) |
RowData |
RowDataPartitionComputer.projectColumnsToWrite(RowData in) |
boolean |
FileSystemTableSink.TableRollingPolicy.shouldRollOnEvent(PartFileInfo<String> partFileState,
RowData element) |
Constructor and Description |
---|
FileSystemLookupFunction(InputFormat<RowData,T> inputFormat,
String[] lookupKeys,
String[] producedNames,
DataType[] producedTypes,
java.time.Duration cacheTTL) |
ProjectionBulkFactory(BulkWriter.Factory<RowData> factory,
RowDataPartitionComputer computer) |
TableBucketAssigner(PartitionComputer<RowData> computer) |
Modifier and Type | Method and Description |
---|---|
void |
StreamingFileWriter.processElement(StreamRecord<RowData> element) |
Constructor and Description |
---|
StreamingFileWriter(long bucketCheckInterval,
StreamingFileSink.BucketsBuilder<RowData,String,? extends StreamingFileSink.BucketsBuilder<RowData,String,?>> bucketsBuilder) |
Modifier and Type | Method and Description |
---|---|
static ArrowWriter<RowData> |
ArrowUtils.createRowDataArrowWriter(org.apache.arrow.vector.VectorSchemaRoot root,
RowType rowType)
Creates an
ArrowWriter for the blink planner for the specified VectorSchemaRoot. |
Modifier and Type | Method and Description |
---|---|
DataStream<RowData> |
ArrowTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
TypeInformation<RowData> |
ArrowSourceFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataArrowReader.read(int rowId) |
Modifier and Type | Method and Description |
---|---|
static BigIntWriter<RowData> |
BigIntWriter.forRow(org.apache.arrow.vector.BigIntVector bigIntVector) |
static BooleanWriter<RowData> |
BooleanWriter.forRow(org.apache.arrow.vector.BitVector bitVector) |
static DateWriter<RowData> |
DateWriter.forRow(org.apache.arrow.vector.DateDayVector dateDayVector) |
static DecimalWriter<RowData> |
DecimalWriter.forRow(org.apache.arrow.vector.DecimalVector decimalVector,
int precision,
int scale) |
static FloatWriter<RowData> |
FloatWriter.forRow(org.apache.arrow.vector.Float4Vector floatVector) |
static DoubleWriter<RowData> |
DoubleWriter.forRow(org.apache.arrow.vector.Float8Vector doubleVector) |
static IntWriter<RowData> |
IntWriter.forRow(org.apache.arrow.vector.IntVector intVector) |
static ArrayWriter<RowData> |
ArrayWriter.forRow(org.apache.arrow.vector.complex.ListVector listVector,
ArrowFieldWriter<ArrayData> elementWriter) |
static SmallIntWriter<RowData> |
SmallIntWriter.forRow(org.apache.arrow.vector.SmallIntVector intVector) |
static RowWriter<RowData> |
RowWriter.forRow(org.apache.arrow.vector.complex.StructVector structVector,
ArrowFieldWriter<RowData>[] fieldsWriters) |
static TinyIntWriter<RowData> |
TinyIntWriter.forRow(org.apache.arrow.vector.TinyIntVector tinyIntVector) |
static TimeWriter<RowData> |
TimeWriter.forRow(org.apache.arrow.vector.ValueVector valueVector) |
static TimestampWriter<RowData> |
TimestampWriter.forRow(org.apache.arrow.vector.ValueVector valueVector,
int precision) |
static VarBinaryWriter<RowData> |
VarBinaryWriter.forRow(org.apache.arrow.vector.VarBinaryVector varBinaryVector) |
static VarCharWriter<RowData> |
VarCharWriter.forRow(org.apache.arrow.vector.VarCharVector varCharVector) |
Modifier and Type | Method and Description |
---|---|
RowData |
ExecutionContext.currentKey() |
RowData |
ExecutionContextImpl.currentKey() |
Modifier and Type | Method and Description |
---|---|
void |
ExecutionContext.setCurrentKey(RowData key)
Sets the current key.
|
void |
ExecutionContextImpl.setCurrentKey(RowData key) |
Modifier and Type | Interface and Description |
---|---|
interface |
Projection<IN extends RowData,OUT extends RowData>
Interface for a code-generated projection that maps one RowData to another.
|
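A code-generated Projection boils down to copying a fixed set of fields from one row into another. A minimal hand-written sketch, using simplified stand-in types (a Row record and a plain functional interface) rather than Flink's actual RowData and generated classes:

```java
import java.util.Arrays;

// Simplified stand-ins: Row for RowData, Projection mirroring the generated interface.
public class ProjectionSketch {
    record Row(Object... fields) {}

    interface Projection<IN, OUT> {
        OUT apply(IN row); // map one row to another
    }

    // What generated code for the field mapping [2, 0] would do by hand.
    static final Projection<Row, Row> KEEP_C_AND_A =
            in -> new Row(in.fields()[2], in.fields()[0]);

    public static void main(String[] args) {
        Row out = KEEP_C_AND_A.apply(new Row("a", "b", "c"));
        System.out.println(Arrays.toString(out.fields())); // prints [c, a]
    }
}
```

In Flink the projection body is generated at plan time from the field mapping, which avoids reflection on the hot path; the hand-written lambda above shows the same contract.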
Modifier and Type | Method and Description |
---|---|
RowData |
AggsHandleFunctionBase.createAccumulators()
Initializes the accumulators and saves them into an accumulators row.
|
RowData |
NamespaceAggsHandleFunctionBase.createAccumulators()
Initializes the accumulators and saves them into an accumulators row.
|
RowData |
AggsHandleFunctionBase.getAccumulators()
Gets the current accumulators (saved in a row), which contain the current aggregated results.
|
RowData |
NamespaceAggsHandleFunctionBase.getAccumulators()
Gets the current accumulators (saved in a row), which contain the current aggregated results.
|
RowData |
AggsHandleFunction.getValue()
Gets the result of the aggregation from the current accumulators.
|
RowData |
NamespaceAggsHandleFunction.getValue(N namespace)
Gets the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
Modifier and Type | Method and Description |
---|---|
void |
AggsHandleFunctionBase.accumulate(RowData input)
Accumulates the input values to the accumulators.
|
void |
NamespaceAggsHandleFunctionBase.accumulate(RowData inputRow)
Accumulates the input values to the accumulators.
|
boolean |
JoinCondition.apply(RowData in1,
RowData in2) |
int |
RecordComparator.compare(RowData o1,
RowData o2) |
abstract Long |
WatermarkGenerator.currentWatermark(RowData row)
Returns the watermark for the current row or null if no watermark should be generated.
|
void |
TableAggsHandleFunction.emitValue(Collector<RowData> out,
RowData currentKey,
boolean isRetract)
Emits the result of the table aggregation through the collector.
|
void |
NamespaceTableAggsHandleFunction.emitValue(N namespace,
RowData key,
Collector<RowData> out)
Emits the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
boolean |
RecordEqualiser.equals(RowData row1,
RowData row2)
Returns
true if the rows are equal to each other and false otherwise. |
int |
HashFunction.hashCode(RowData row) |
void |
NamespaceAggsHandleFunctionBase.merge(N namespace,
RowData otherAcc)
Merges the other accumulators into the current accumulators.
|
void |
AggsHandleFunctionBase.merge(RowData accumulators)
Merges the other accumulators into the current accumulators.
|
void |
NormalizedKeyComputer.putKey(RowData record,
MemorySegment target,
int offset)
Writes a normalized key for the given record into the target
MemorySegment . |
void |
AggsHandleFunctionBase.retract(RowData input)
Retracts the input values from the accumulators.
|
void |
NamespaceAggsHandleFunctionBase.retract(RowData inputRow)
Retracts the input values from the accumulators.
|
void |
NamespaceAggsHandleFunctionBase.setAccumulators(N namespace,
RowData accumulators)
Sets the current accumulators (saved in a row), which contain the current aggregated results.
|
void |
AggsHandleFunctionBase.setAccumulators(RowData accumulators)
Sets the current accumulators (saved in a row), which contain the current aggregated results.
|
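The createAccumulators/accumulate/retract/merge/getValue lifecycle listed above can be sketched with a toy count aggregator. The method names mirror the table, but the types are deliberately simplified (a long accumulator instead of an accumulators RowData), so this illustrates the contract rather than Flink's generated code:

```java
// Toy count aggregator following the AggsHandleFunctionBase lifecycle.
public class CountAggSketch {
    private long acc; // the "accumulators row" collapsed to a single long for this sketch

    public long createAccumulators() { return 0L; }                 // fresh accumulator
    public void setAccumulators(long accumulators) { this.acc = accumulators; }
    public long getAccumulators() { return acc; }
    public void accumulate(Object inputRow) { acc++; }              // count every input row
    public void retract(Object inputRow) { acc--; }                 // undo one input row
    public void merge(long otherAcc) { acc += otherAcc; }           // combine partial counts
    public long getValue() { return acc; }                          // final aggregate result

    public static void main(String[] args) {
        CountAggSketch agg = new CountAggSketch();
        agg.setAccumulators(agg.createAccumulators());
        agg.accumulate("r1");
        agg.accumulate("r2");
        agg.retract("r1");   // a retraction message arrived for r1
        agg.merge(5L);       // merge a partial count from another pane
        System.out.println(agg.getValue()); // prints 6
    }
}
```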
Modifier and Type | Method and Description |
---|---|
void |
TableAggsHandleFunction.emitValue(Collector<RowData> out,
RowData currentKey,
boolean isRetract)
Emits the result of the table aggregation through the collector.
|
void |
NamespaceTableAggsHandleFunction.emitValue(N namespace,
RowData key,
Collector<RowData> out)
Emits the result of the aggregation from the current accumulators and namespace properties
(like window start).
|
Modifier and Type | Class and Description |
---|---|
class |
WrappedRowIterator<T extends RowData>
Wraps a
MutableObjectIterator in a Java RowIterator. |
Modifier and Type | Method and Description |
---|---|
RowData |
ProbeIterator.current() |
RowData |
BinaryHashTable.getCurrentProbeRow() |
RowData |
LongHybridHashTable.getCurrentProbeRow() |
Modifier and Type | Method and Description |
---|---|
abstract long |
LongHybridHashTable.getBuildLongKey(RowData row)
For code generation: gets the build-side long key.
|
abstract long |
LongHybridHashTable.getProbeLongKey(RowData row)
For code generation: gets the probe-side long key.
|
abstract BinaryRowData |
LongHybridHashTable.probeToBinary(RowData row)
For code generation: converts a probe-side row to BinaryRowData.
|
void |
BinaryHashTable.putBuildRow(RowData row)
Puts a build-side row into the hash table.
|
void |
ProbeIterator.setInstance(RowData instance) |
boolean |
BinaryHashTable.tryProbe(RowData record)
Finds matching build-side rows for a probe row.
|
boolean |
LongHybridHashTable.tryProbe(RowData record) |
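The build/probe protocol above (putBuildRow for every build-side row, then tryProbe followed by reading the matched rows for each probe row) can be sketched with a plain HashMap standing in for the binary hash table; the long keys and String rows are stand-ins, not Flink's binary row format:

```java
import java.util.*;

public class HashJoinSketch {
    private final Map<Long, List<String>> table = new HashMap<>();
    private List<String> currentMatches = List.of();

    // Build phase: index every build-side row by its join key.
    void putBuildRow(long key, String row) {
        table.computeIfAbsent(key, k -> new ArrayList<>()).add(row);
    }

    // Probe phase: find matching build-side rows for one probe row.
    boolean tryProbe(long key) {
        currentMatches = table.getOrDefault(key, List.of());
        return !currentMatches.isEmpty();
    }

    List<String> matchedBuildRows() { return currentMatches; }

    public static void main(String[] args) {
        HashJoinSketch join = new HashJoinSketch();
        join.putBuildRow(1L, "build-a");
        join.putBuildRow(1L, "build-b");
        join.putBuildRow(2L, "build-c");
        if (join.tryProbe(1L)) {
            System.out.println(join.matchedBuildRows()); // prints [build-a, build-b]
        }
    }
}
```

The real BinaryHashTable additionally spills partitions to disk via the IOManager and can filter with bloom filters; this sketch keeps only the in-memory build/probe shape.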
Constructor and Description |
---|
BinaryHashTable(Configuration conf,
Object owner,
AbstractRowDataSerializer buildSideSerializer,
AbstractRowDataSerializer probeSideSerializer,
Projection<RowData,BinaryRowData> buildSideProjection,
Projection<RowData,BinaryRowData> probeSideProjection,
MemoryManager memManager,
long reservedMemorySize,
IOManager ioManager,
int avgRecordLen,
long buildRowCount,
boolean useBloomFilters,
HashJoinType type,
JoinCondition condFunc,
boolean reverseJoin,
boolean[] filterNulls,
boolean tryDistinctBuildRow) |
Modifier and Type | Method and Description |
---|---|
RowData |
EmptyRowDataKeySelector.getKey(RowData value) |
RowData |
BinaryRowDataKeySelector.getKey(RowData value) |
Modifier and Type | Method and Description |
---|---|
RowData |
MiniBatchLocalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchIncrementalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchGlobalGroupAggFunction.addInput(RowData previousAcc,
RowData input)
The
previousAcc is an accumulator, but input is a row in <key, accumulator>
schema; the generated MiniBatchGlobalGroupAggFunction.localAgg projects the input to an
accumulator in its merge method. |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
RowData |
MiniBatchLocalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchIncrementalGroupAggFunction.addInput(RowData previousAcc,
RowData input) |
RowData |
MiniBatchGlobalGroupAggFunction.addInput(RowData previousAcc,
RowData input)
The
previousAcc is an accumulator, but input is a row in <key, accumulator>
schema; the generated MiniBatchGlobalGroupAggFunction.localAgg projects the input to an
accumulator in its merge method. |
void |
GroupTableAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
GroupAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
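The mini-batch addInput contract (fold one buffered input into the previous per-key accumulator, where a null previousAcc means the key has no accumulator in the bundle yet) can be sketched as follows; the Long accumulator and string keys are stand-ins for Flink's RowData:

```java
import java.util.*;

public class MiniBatchSketch {
    // addInput folds one buffered input into the previous per-key accumulator;
    // previousAcc == null means no accumulator exists for this key yet.
    static Long addInput(Long previousAcc, long input) {
        return (previousAcc == null ? 0L : previousAcc) + input;
    }

    public static void main(String[] args) {
        Map<String, Long> bundle = new HashMap<>();
        // Buffer a mini-batch of (key, value) inputs before touching keyed state.
        for (Map.Entry<String, Long> e :
                List.of(Map.entry("k1", 2L), Map.entry("k2", 3L), Map.entry("k1", 5L))) {
            bundle.put(e.getKey(), addInput(bundle.get(e.getKey()), e.getValue()));
        }
        System.out.println(bundle.get("k1")); // prints 7
    }
}
```

Batching this way lets finishBundle touch state once per key per bundle instead of once per record, which is the point of the mini-batch variants in the table above.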
Modifier and Type | Method and Description |
---|---|
List<RowData> |
MiniBatchGroupAggFunction.addInput(List<RowData> value,
RowData input) |
void |
MiniBatchGroupAggFunction.finishBundle(Map<RowData,List<RowData>> buffer,
Collector<RowData> out) |
void |
MiniBatchLocalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
MiniBatchIncrementalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
MiniBatchGlobalGroupAggFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
GroupTableAggFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
GroupAggFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
GroupTableAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
GroupAggFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
MiniBatchIncrementalGroupAggFunction(GeneratedAggsHandleFunction genPartialAggsHandler,
GeneratedAggsHandleFunction genFinalAggsHandler,
KeySelector<RowData,RowData> finalKeySelector) |
Modifier and Type | Method and Description |
---|---|
RowData |
MiniBatchDeduplicateKeepFirstRowFunction.addInput(RowData value,
RowData input) |
RowData |
MiniBatchDeduplicateKeepLastRowFunction.addInput(RowData value,
RowData input) |
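The two deduplication addInput variants differ only in which buffered row wins inside a mini-batch. A sketch of the assumed semantics (keep the first vs. keep the last row per key), with strings standing in for RowData:

```java
public class DedupSketch {
    // Keep-first: once a key has a buffered row, later rows in the bundle are ignored.
    static String addInputKeepFirst(String value, String input) {
        return value != null ? value : input;
    }

    // Keep-last: the newest row in the bundle always replaces the buffered one.
    static String addInputKeepLast(String value, String input) {
        return input;
    }

    public static void main(String[] args) {
        System.out.println(addInputKeepFirst("r1", "r2")); // prints r1
        System.out.println(addInputKeepLast("r1", "r2"));  // prints r2
    }
}
```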
Modifier and Type | Method and Description |
---|---|
RowData |
MiniBatchDeduplicateKeepFirstRowFunction.addInput(RowData value,
RowData input) |
RowData |
MiniBatchDeduplicateKeepLastRowFunction.addInput(RowData value,
RowData input) |
void |
DeduplicateKeepLastRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
DeduplicateKeepFirstRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
void |
MiniBatchDeduplicateKeepFirstRowFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
MiniBatchDeduplicateKeepLastRowFunction.finishBundle(Map<RowData,RowData> buffer,
Collector<RowData> out) |
void |
DeduplicateKeepLastRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
DeduplicateKeepFirstRowFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
MiniBatchDeduplicateKeepFirstRowFunction(TypeSerializer<RowData> typeSerializer,
long minRetentionTime) |
MiniBatchDeduplicateKeepLastRowFunction(RowDataTypeInfo rowTypeInfo,
boolean generateUpdateBefore,
boolean generateInsert,
TypeSerializer<RowData> typeSerializer,
long minRetentionTime) |
Modifier and Type | Method and Description |
---|---|
RowData |
SortMergeJoinIterator.getProbeRow() |
RowData |
OuterJoinPaddingUtil.padLeft(RowData leftRow)
Returns a padding result with the given left row.
|
RowData |
OuterJoinPaddingUtil.padRight(RowData rightRow)
Returns a padding result with the given right row.
|
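OuterJoinPaddingUtil.padLeft/padRight pad the missing side of an outer-join result with nulls. A simplified version over plain object arrays (the real utility works on RowData; the arities here are illustrative):

```java
import java.util.Arrays;

public class PaddingSketch {
    private final int leftArity;
    private final int rightArity;

    PaddingSketch(int leftArity, int rightArity) {
        this.leftArity = leftArity;
        this.rightArity = rightArity;
    }

    // Pads the missing right side with nulls (left row had no match).
    Object[] padLeft(Object[] leftRow) {
        return Arrays.copyOf(leftRow, leftRow.length + rightArity); // trailing slots stay null
    }

    // Pads the missing left side with nulls (right row had no match).
    Object[] padRight(Object[] rightRow) {
        Object[] joined = new Object[leftArity + rightRow.length];
        System.arraycopy(rightRow, 0, joined, leftArity, rightRow.length);
        return joined;
    }

    public static void main(String[] args) {
        PaddingSketch util = new PaddingSketch(2, 2);
        System.out.println(Arrays.toString(util.padLeft(new Object[] {"l1", "l2"})));
        // prints [l1, l2, null, null]
    }
}
```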
Modifier and Type | Method and Description |
---|---|
abstract void |
HashJoinOperator.join(RowIterator<BinaryRowData> buildIter,
RowData probeRow) |
RowData |
OuterJoinPaddingUtil.padLeft(RowData leftRow)
Returns a padding result with the given left row.
|
RowData |
OuterJoinPaddingUtil.padRight(RowData rightRow)
Returns a padding result with the given right row.
|
Modifier and Type | Method and Description |
---|---|
void |
HashJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
SortMergeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
HashJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
SortMergeJoinOperator.processElement2(StreamRecord<RowData> element) |
Constructor and Description |
---|
ProcTimeIntervalJoin(FlinkJoinType joinType,
long leftLowerBound,
long leftUpperBound,
RowDataTypeInfo leftType,
RowDataTypeInfo rightType,
GeneratedFunction<FlatJoinFunction<RowData,RowData,RowData>> genJoinFunc) |
RowTimeIntervalJoin(FlinkJoinType joinType,
long leftLowerBound,
long leftUpperBound,
long allowedLateness,
RowDataTypeInfo leftType,
RowDataTypeInfo rightType,
GeneratedFunction<FlatJoinFunction<RowData,RowData,RowData>> genJoinFunc,
int leftTimeIdx,
int rightTimeIdx) |
Modifier and Type | Field and Description |
---|---|
protected TableFunctionCollector<RowData> |
LookupJoinRunner.collector |
Modifier and Type | Method and Description |
---|---|
TableFunctionResultFuture<RowData> |
AsyncLookupJoinWithCalcRunner.createFetcherResultFuture(Configuration parameters) |
TableFunctionResultFuture<RowData> |
AsyncLookupJoinRunner.createFetcherResultFuture(Configuration parameters) |
Collector<RowData> |
LookupJoinRunner.getFetcherCollector() |
Collector<RowData> |
LookupJoinWithCalcRunner.getFetcherCollector() |
Modifier and Type | Method and Description |
---|---|
void |
AsyncLookupJoinRunner.asyncInvoke(RowData input,
ResultFuture<RowData> resultFuture) |
void |
LookupJoinRunner.processElement(RowData in,
ProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
void |
AsyncLookupJoinRunner.asyncInvoke(RowData input,
ResultFuture<RowData> resultFuture) |
void |
LookupJoinRunner.processElement(RowData in,
ProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Field and Description |
---|---|
RowData |
AbstractStreamingJoinOperator.OuterRecord.record |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
AbstractStreamingJoinOperator.collector |
Modifier and Type | Method and Description |
---|---|
Iterable<RowData> |
AbstractStreamingJoinOperator.AssociatedRecords.getRecords()
Gets the iterable of records.
|
Modifier and Type | Method and Description |
---|---|
static AbstractStreamingJoinOperator.AssociatedRecords |
AbstractStreamingJoinOperator.AssociatedRecords.of(RowData input,
boolean inputIsLeft,
JoinRecordStateView otherSideStateView,
JoinCondition condition)
Creates an
AbstractStreamingJoinOperator.AssociatedRecords which represents the records associated with the input
row. |
Modifier and Type | Method and Description |
---|---|
void |
StreamingSemiAntiJoinOperator.processElement1(StreamRecord<RowData> element)
Processes an input element and outputs incremental joined records; retraction messages are
sent in some scenarios.
|
void |
StreamingJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
StreamingSemiAntiJoinOperator.processElement2(StreamRecord<RowData> element)
Processes an input element and outputs incremental joined records; retraction messages are
sent in some scenarios.
|
void |
StreamingJoinOperator.processElement2(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
Iterable<RowData> |
JoinRecordStateView.getRecords()
Gets all the records under the current context (i.e.
|
Iterable<Tuple2<RowData,Integer>> |
OuterJoinRecordStateView.getRecordsAndNumOfAssociations()
Gets all the records and number of associations under the current context (i.e.
|
KeySelector<RowData,RowData> |
JoinInputSideSpec.getUniqueKeySelector()
Returns the
KeySelector to extract the unique key from the input row. |
Modifier and Type | Method and Description |
---|---|
void |
JoinRecordStateView.addRecord(RowData record)
Adds a new record to the state view.
|
void |
OuterJoinRecordStateView.addRecord(RowData record,
int numOfAssociations)
Adds a new record with the number of associations to the state view.
|
void |
JoinRecordStateView.retractRecord(RowData record)
Retracts the record from the state view.
|
void |
OuterJoinRecordStateView.updateNumOfAssociations(RowData record,
int numOfAssociations)
Updates the number of associations belonging to the record.
|
Modifier and Type | Method and Description |
---|---|
static JoinInputSideSpec |
JoinInputSideSpec.withUniqueKey(RowDataTypeInfo uniqueKeyType,
KeySelector<RowData,RowData> uniqueKeySelector)
Creates a
JoinInputSideSpec whose input has a unique key. |
static JoinInputSideSpec |
JoinInputSideSpec.withUniqueKeyContainedByJoinKey(RowDataTypeInfo uniqueKeyType,
KeySelector<RowData,RowData> uniqueKeySelector)
Creates a
JoinInputSideSpec whose input has a unique key that is
contained in the join key. |
Modifier and Type | Method and Description |
---|---|
void |
TemporalRowTimeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
TemporalProcessTimeJoinOperator.processElement1(StreamRecord<RowData> element) |
void |
TemporalRowTimeJoinOperator.processElement2(StreamRecord<RowData> element) |
void |
TemporalProcessTimeJoinOperator.processElement2(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<RowData> |
RowtimeProcessFunction.getProducedType() |
Modifier and Type | Method and Description |
---|---|
int |
RowDataEventComparator.compare(RowData row1,
RowData row2) |
boolean |
IterativeConditionRunner.filter(RowData value,
IterativeCondition.Context<RowData> ctx) |
void |
RowtimeProcessFunction.processElement(RowData value,
ProcessFunction.Context ctx,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
boolean |
IterativeConditionRunner.filter(RowData value,
IterativeCondition.Context<RowData> ctx) |
void |
RowtimeProcessFunction.processElement(RowData value,
ProcessFunction.Context ctx,
Collector<RowData> out) |
void |
PatternProcessFunctionRunner.processMatch(Map<String,List<RowData>> match,
PatternProcessFunction.Context ctx,
Collector<RowData> out) |
Constructor and Description |
---|
IterativeConditionRunner(GeneratedFunction<RichIterativeCondition<RowData>> generatedFunction) |
PatternProcessFunctionRunner(GeneratedFunction<PatternProcessFunction<RowData,RowData>> generatedFunction) |
RowtimeProcessFunction(int rowtimeIdx,
TypeInformation<RowData> returnType,
int precision) |
Modifier and Type | Method and Description |
---|---|
void |
ProcTimeUnboundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AbstractRowTimeUnboundedPrecedingOver.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out)
Puts an element from the input stream into state if it is not late.
|
Modifier and Type | Method and Description |
---|---|
void |
ProcTimeUnboundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
ProcTimeRangeBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
ProcTimeRowsBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
RowTimeRowsBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
RowTimeRangeBoundedPrecedingFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
AbstractRowTimeUnboundedPrecedingOver.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
ProcTimeUnboundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
ProcTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRowsBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
RowTimeRangeBoundedPrecedingFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AbstractRowTimeUnboundedPrecedingOver.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out)
Puts an element from the input stream into state if it is not late.
|
void |
BufferDataOverWindowOperator.processElement(StreamRecord<RowData> element) |
void |
NonBufferOverWindowOperator.processElement(StreamRecord<RowData> element) |
void |
RowTimeRowsUnboundedPrecedingFunction.processElementsWithSameTimestamp(List<RowData> curRowList,
Collector<RowData> out) |
protected abstract void |
AbstractRowTimeUnboundedPrecedingOver.processElementsWithSameTimestamp(List<RowData> curRowList,
Collector<RowData> out)
Processes elements with the same timestamp; the mechanism differs between rows and range windows.
|
void |
RowTimeRangeUnboundedPrecedingFunction.processElementsWithSameTimestamp(List<RowData> curRowList,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
RowData |
OverWindowFrame.process(int index,
RowData current)
Returns the accumulator (ACC) of the window frame.
|
RowData |
RowSlidingOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
OffsetOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
UnboundedOverWindowFrame.process(int index,
RowData current) |
RowData |
InsensitiveOverFrame.process(int index,
RowData current) |
RowData |
RangeSlidingOverFrame.process(int index,
RowData current) |
Modifier and Type | Method and Description |
---|---|
long |
OffsetOverFrame.CalcOffsetFunc.calc(RowData row) |
RowData |
OverWindowFrame.process(int index,
RowData current)
Returns the accumulator (ACC) of the window frame.
|
RowData |
RowSlidingOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
RangeUnboundedFollowingOverFrame.process(int index,
RowData current) |
RowData |
OffsetOverFrame.process(int index,
RowData current) |
RowData |
RowUnboundedPrecedingOverFrame.process(int index,
RowData current) |
RowData |
UnboundedOverWindowFrame.process(int index,
RowData current) |
RowData |
InsensitiveOverFrame.process(int index,
RowData current) |
RowData |
RangeSlidingOverFrame.process(int index,
RowData current) |
Modifier and Type | Method and Description |
---|---|
void |
AbstractStatelessFunctionOperator.StreamRecordRowDataWrappingCollector.collect(RowData record) |
Constructor and Description |
---|
StreamRecordRowDataWrappingCollector(Collector<StreamRecord<RowData>> out) |
Modifier and Type | Method and Description |
---|---|
RowData |
AbstractRowDataPythonScalarFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
PythonFunctionRunner<RowData> |
RowDataPythonScalarFunctionOperator.createPythonFunctionRunner(org.apache.beam.sdk.fn.data.FnDataReceiver<byte[]> resultReceiver,
PythonEnvironmentManager pythonEnvironmentManager,
Map<String,String> jobOptions) |
Modifier and Type | Method and Description |
---|---|
void |
AbstractRowDataPythonScalarFunctionOperator.bufferInput(RowData input) |
RowData |
AbstractRowDataPythonScalarFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
PythonFunctionRunner<RowData> |
RowDataArrowPythonScalarFunctionOperator.createPythonFunctionRunner(org.apache.beam.sdk.fn.data.FnDataReceiver<byte[]> resultReceiver,
PythonEnvironmentManager pythonEnvironmentManager,
Map<String,String> jobOptions) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataPythonTableFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Method and Description |
---|---|
PythonFunctionRunner<RowData> |
RowDataPythonTableFunctionOperator.createPythonFunctionRunner(org.apache.beam.sdk.fn.data.FnDataReceiver<byte[]> resultReceiver,
PythonEnvironmentManager pythonEnvironmentManager,
Map<String,String> jobOptions) |
Modifier and Type | Method and Description |
---|---|
void |
RowDataPythonTableFunctionOperator.bufferInput(RowData input) |
RowData |
RowDataPythonTableFunctionOperator.getFunctionInput(RowData element) |
Modifier and Type | Field and Description |
---|---|
protected Comparator<RowData> |
AbstractTopNFunction.sortKeyComparator |
protected KeySelector<RowData,RowData> |
AbstractTopNFunction.sortKeySelector |
Modifier and Type | Method and Description |
---|---|
protected boolean |
AbstractTopNFunction.checkSortKeyInBufferRange(RowData sortKey,
org.apache.flink.table.runtime.operators.rank.TopNBuffer buffer)
Checks whether the record should be put into the buffer.
|
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow,
long rank) |
protected long |
AbstractTopNFunction.initRankEnd(RowData row)
Initializes the rank end.
|
void |
UpdatableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
RetractableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AppendOnlyTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
Modifier and Type | Method and Description |
---|---|
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectDelete(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectInsert(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateAfter(Collector<RowData> out,
RowData inputRow,
long rank) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow) |
protected void |
AbstractTopNFunction.collectUpdateBefore(Collector<RowData> out,
RowData inputRow,
long rank) |
void |
UpdatableTopNFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
RetractableTopNFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
AppendOnlyTopNFunction.onTimer(long timestamp,
KeyedProcessFunction.OnTimerContext ctx,
Collector<RowData> out) |
void |
UpdatableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
void |
RetractableTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context ctx,
Collector<RowData> out) |
void |
AppendOnlyTopNFunction.processElement(RowData input,
KeyedProcessFunction.Context context,
Collector<RowData> out) |
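The `collectInsert`/`collectUpdateBefore`/`collectUpdateAfter`/`collectDelete` methods above emit changelog records as the top-N buffer changes. As a minimal, Flink-free sketch of the underlying idea (all names here are hypothetical, not Flink API), a bounded top-N buffer can be kept in a `TreeMap` ordered by the sort key, reporting which record falls out of the top N when a new one is inserted (the analogue of emitting a DELETE for the displaced record and an INSERT for the new one):

```java
import java.util.Comparator;
import java.util.TreeMap;

// Hypothetical sketch of a bounded top-N buffer: keeps at most `limit`
// records ordered by a sort key, and reports the record evicted when a
// new one pushes it out of the top N.
final class TopNBufferSketch<K, V> {
    private final TreeMap<K, V> buffer;
    private final int limit;

    TopNBufferSketch(Comparator<K> sortKeyComparator, int limit) {
        this.buffer = new TreeMap<>(sortKeyComparator);
        this.limit = limit;
    }

    /** Inserts a record; returns the evicted value, or null if none fell out. */
    V put(K sortKey, V value) {
        buffer.put(sortKey, value);
        if (buffer.size() > limit) {
            // The last entry now ranks below the top N and is retracted.
            return buffer.pollLastEntry().getValue();
        }
        return null;
    }

    boolean contains(K sortKey) {
        return buffer.containsKey(sortKey);
    }
}
```

The real operators additionally track rank numbers and state backends, but the eviction decision follows this shape.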
Modifier and Type | Method and Description |
---|---|
void |
SinkOperator.processElement(StreamRecord<RowData> element) |
Constructor and Description |
---|
SinkOperator(SinkFunction<RowData> sinkFunction,
int rowtimeFieldIndex,
ExecutionConfigOptions.NotNullEnforcer notNullEnforcer,
int[] notNullFieldIndices,
String[] allFieldNames) |
Modifier and Type | Method and Description |
---|---|
boolean |
BinaryInMemorySortBuffer.write(RowData record)
Writes a given record to this sort buffer.
|
void |
BinaryExternalSorter.write(RowData current) |
protected void |
BinaryIndexedSortable.writeIndexAndNormalizedKey(RowData record,
long currOffset)
Writes the record's index entry and its normalized key.
|
Modifier and Type | Method and Description |
---|---|
static BinaryInMemorySortBuffer |
BinaryInMemorySortBuffer.createBuffer(NormalizedKeyComputer normalizedKeyComputer,
AbstractRowDataSerializer<RowData> inputSerializer,
BinaryRowDataSerializer serializer,
RecordComparator comparator,
MemorySegmentPool memoryPool)
Creates an in-memory sorter that sorts records as they are inserted.
|
void |
RowTimeSortOperator.onEventTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
ProcTimeSortOperator.onEventTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
RowTimeSortOperator.onProcessingTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
ProcTimeSortOperator.onProcessingTime(InternalTimer<RowData,VoidNamespace> timer) |
void |
StreamSortOperator.processElement(StreamRecord<RowData> element) |
void |
RowTimeSortOperator.processElement(StreamRecord<RowData> element) |
void |
SortLimitOperator.processElement(StreamRecord<RowData> element) |
void |
ProcTimeSortOperator.processElement(StreamRecord<RowData> element) |
void |
SortOperator.processElement(StreamRecord<RowData> element) |
void |
RankOperator.processElement(StreamRecord<RowData> element) |
void |
LimitOperator.processElement(StreamRecord<RowData> element) |
Constructor and Description |
---|
BinaryExternalSorter(Object owner,
MemoryManager memoryManager,
long reservedMemorySize,
IOManager ioManager,
AbstractRowDataSerializer<RowData> inputSerializer,
BinaryRowDataSerializer serializer,
NormalizedKeyComputer normalizedKeyComputer,
RecordComparator comparator,
Configuration conf) |
BinaryExternalSorter(Object owner,
MemoryManager memoryManager,
long reservedMemorySize,
IOManager ioManager,
AbstractRowDataSerializer<RowData> inputSerializer,
BinaryRowDataSerializer serializer,
NormalizedKeyComputer normalizedKeyComputer,
RecordComparator comparator,
Configuration conf,
float startSpillingFraction) |
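The `startSpillingFraction` parameter in the second constructor controls when in-memory sorting gives way to spilling sorted runs to disk via the `IOManager`. A toy illustration of that threshold check (hypothetical code, assuming the fraction is measured against the reserved memory budget):

```java
// Hypothetical sketch: decide whether an external sorter should start
// spilling to disk once in-memory usage crosses a configured fraction
// of its reserved memory budget.
final class SpillPolicySketch {
    static boolean shouldSpill(long usedBytes, long reservedBytes, float startSpillingFraction) {
        return usedBytes >= (long) (reservedBytes * startSpillingFraction);
    }
}
```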
Modifier and Type | Method and Description |
---|---|
RowData |
ValuesInputFormat.nextRecord(RowData reuse) |
Modifier and Type | Method and Description |
---|---|
RowData |
ValuesInputFormat.nextRecord(RowData reuse) |
Constructor and Description |
---|
ValuesInputFormat(GeneratedInput<GenericInputFormat<RowData>> generatedInput,
RowDataTypeInfo returnType) |
Modifier and Type | Field and Description |
---|---|
protected TimestampedCollector<RowData> |
WindowOperator.collector
This is used for emitting elements with a given timestamp.
|
protected InternalValueState<K,W,RowData> |
WindowOperator.previousState |
Modifier and Type | Method and Description |
---|---|
void |
WindowOperator.processElement(StreamRecord<RowData> record) |
Modifier and Type | Method and Description |
---|---|
Collection<TimeWindow> |
SessionWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<TimeWindow> |
SlidingWindowAssigner.assignWindows(RowData element,
long timestamp) |
abstract Collection<W> |
WindowAssigner.assignWindows(RowData element,
long timestamp)
Given the timestamp and element, returns the set of windows into which it should be placed.
|
Collection<CountWindow> |
CountTumblingWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<TimeWindow> |
TumblingWindowAssigner.assignWindows(RowData element,
long timestamp) |
Collection<CountWindow> |
CountSlidingWindowAssigner.assignWindows(RowData element,
long timestamp) |
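The `assignWindows` methods above map a record's timestamp to one or more windows: for tumbling windows of size `s` the element lands in exactly one window, while for sliding windows of size `s` and slide `d` it lands in `s / d` overlapping windows. A Flink-free sketch of that arithmetic (hypothetical helper names, assuming non-negative timestamps and windows aligned at epoch):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the window-assignment arithmetic behind
// TumblingWindowAssigner and SlidingWindowAssigner.
final class WindowMathSketch {
    /** Start of the tumbling window containing `timestamp`. */
    static long tumblingWindowStart(long timestamp, long size) {
        return timestamp - (timestamp % size);
    }

    /** Start timestamps of all sliding windows containing `timestamp`. */
    static List<Long> slidingWindowStarts(long timestamp, long size, long slide) {
        List<Long> starts = new ArrayList<>();
        long lastStart = timestamp - (timestamp % slide);
        // Walk backwards over every slide whose window still covers `timestamp`.
        for (long start = lastStart; start > timestamp - size; start -= slide) {
            starts.add(start);
        }
        return starts;
    }
}
```

For example, with 10s windows sliding every 5s, a record at t=7.5s belongs to the windows starting at 5s and at 0s.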
Modifier and Type | Method and Description |
---|---|
RowData |
InternalWindowProcessFunction.Context.getWindowAccumulators(W window)
Gets the accumulators of the given window.
|
Modifier and Type | Method and Description |
---|---|
Collection<W> |
GeneralWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp) |
abstract Collection<W> |
InternalWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp)
Assigns the input element to the actual windows on which the Trigger should fire. |
Collection<W> |
PanedWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp) |
Collection<W> |
MergingWindowProcessFunction.assignActualWindows(RowData inputRow,
long timestamp) |
Collection<W> |
GeneralWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp) |
abstract Collection<W> |
InternalWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp)
Assigns the input element to the state namespaces into which it should be accumulated or retracted.
|
Collection<W> |
PanedWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp) |
Collection<W> |
MergingWindowProcessFunction.assignStateNamespace(RowData inputRow,
long timestamp) |
void |
InternalWindowProcessFunction.Context.setWindowAccumulators(W window,
RowData acc)
Sets the accumulators of the given window.
|
Modifier and Type | Method and Description |
---|---|
Long |
BoundedOutOfOrderWatermarkGenerator.currentWatermark(RowData row) |
Modifier and Type | Method and Description |
---|---|
void |
RowTimeMiniBatchAssginerOperator.processElement(StreamRecord<RowData> element) |
void |
ProcTimeMiniBatchAssignerOperator.processElement(StreamRecord<RowData> element) |
void |
WatermarkAssignerOperator.processElement(StreamRecord<RowData> element) |
Modifier and Type | Method and Description |
---|---|
StreamPartitioner<RowData> |
BinaryHashPartitioner.copy() |
Modifier and Type | Method and Description |
---|---|
int |
BinaryHashPartitioner.selectChannel(SerializationDelegate<StreamRecord<RowData>> record) |
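`selectChannel` routes each outgoing record to one downstream channel by hashing its key fields. A minimal sketch of that routing decision (hypothetical code, not the `BinaryHashPartitioner` internals): mask the sign bit of the hash so the modulo result is always a valid non-negative channel index.

```java
// Hypothetical sketch: pick an output channel from a key's hash code,
// masking the sign bit so the modulo result is never negative.
final class ChannelSelectorSketch {
    static int selectChannel(Object key, int numberOfChannels) {
        return (key.hashCode() & Integer.MAX_VALUE) % numberOfChannels;
    }
}
```

The same key always maps to the same channel, which is what keyed downstream operators rely on.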
Modifier and Type | Method and Description |
---|---|
ArrowWriter<RowData> |
RowDataArrowPythonScalarFunctionRunner.createArrowWriter() |
Modifier and Type | Class and Description |
---|---|
class |
AbstractRowDataSerializer<T extends RowData>
Row serializer that additionally provides paged serialization methods.
|
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.copy(RowData from) |
RowData |
RowDataSerializer.copy(RowData from,
RowData reuse) |
RowData |
RowDataSerializer.createInstance() |
RowData |
RowDataSerializer.deserialize(DataInputView source) |
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
RowData |
RowDataSerializer.deserializeFromPages(AbstractPagedInputView source) |
RowData |
RowDataSerializer.deserializeFromPages(RowData reuse,
AbstractPagedInputView source) |
RowData |
RowDataSerializer.mapFromPages(AbstractPagedInputView source) |
RowData |
RowDataSerializer.mapFromPages(RowData reuse,
AbstractPagedInputView source) |
Modifier and Type | Method and Description |
---|---|
TypeComparator<RowData> |
RowDataTypeInfo.createComparator(int[] logicalKeyFields,
boolean[] orders,
int logicalFieldOffset,
ExecutionConfig config) |
CompositeType.TypeComparatorBuilder<RowData> |
RowDataTypeInfo.createTypeComparatorBuilder() |
TypeSerializer<RowData> |
RowDataSerializer.duplicate() |
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
TypeSerializerSnapshot<RowData> |
RowDataSerializer.snapshotConfiguration() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.copy(RowData from) |
RowData |
RowDataSerializer.copy(RowData from,
RowData reuse) |
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
RowData |
RowDataSerializer.deserializeFromPages(RowData reuse,
AbstractPagedInputView source) |
RowData |
RowDataSerializer.mapFromPages(RowData reuse,
AbstractPagedInputView source) |
void |
RowDataSerializer.serialize(RowData row,
DataOutputView target) |
int |
RowDataSerializer.serializeToPages(RowData row,
AbstractPagedOutputView target) |
BinaryRowData |
RowDataSerializer.toBinaryRow(RowData row)
Converts RowData into BinaryRowData. |
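`toBinaryRow` converts an arbitrary `RowData` into a compact binary layout: a fixed-length part (null bits plus a fixed 8-byte slot per field) followed by variable-length data. As an illustration of how such a fixed-length part might be sized (an assumed layout chosen for illustration, not necessarily Flink's exact on-heap format):

```java
// Hypothetical sketch of sizing the fixed-length part of a binary row
// layout, assuming an 8-bit header plus one null bit per field, padded
// to whole 8-byte words, followed by one 8-byte slot per field.
final class BinaryRowLayoutSketch {
    static int nullBitsSizeInBytes(int arity) {
        // 8 header bits + `arity` null bits, rounded up to 64-bit words.
        return ((8 + arity + 63) / 64) * 8;
    }

    static int fixedLengthPartSize(int arity) {
        return nullBitsSizeInBytes(arity) + 8 * arity;
    }
}
```

The fixed-width slots are what make field access O(1) on the binary form, which is why the sorters and serializers above prefer `BinaryRowData`.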
Modifier and Type | Method and Description |
---|---|
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.deserialize(DataInputView source) |
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
Modifier and Type | Method and Description |
---|---|
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
TypeSerializerSnapshot<RowData> |
RowDataSerializer.snapshotConfiguration() |
Modifier and Type | Method and Description |
---|---|
RowData |
RowDataSerializer.deserialize(RowData reuse,
DataInputView source) |
void |
RowDataSerializer.serialize(RowData row,
DataOutputView target) |
Modifier and Type | Method and Description |
---|---|
TypeSerializerSchemaCompatibility<RowData> |
RowDataSerializer.RowDataSerializerSnapshot.resolveSchemaCompatibility(TypeSerializer<RowData> newSerializer) |
Modifier and Type | Interface and Description |
---|---|
interface |
RowIterator<T extends RowData>
An internal iterator interface that presents a more restrictive API than Iterator. |
Modifier and Type | Method and Description |
---|---|
void |
ResettableExternalBuffer.add(RowData row) |
void |
ResettableRowBuffer.add(RowData row)
Appends the specified row to the end of this buffer.
|
Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.