Modifier and Type | Method and Description |
---|---|
protected Row | HBaseRowInputFormat.mapResultToOutType(org.apache.hadoop.hbase.client.Result res) |
Modifier and Type | Method and Description |
---|---|
TableSink<Tuple2<Boolean,Row>> | HBaseUpsertTableSink.configure(String[] fieldNames, TypeInformation<?>[] fieldTypes) |
StreamTableSink<Tuple2<Boolean,Row>> | HBaseTableFactory.createStreamTableSink(Map<String,String> properties) |
StreamTableSource<Row> | HBaseTableFactory.createStreamTableSource(Map<String,String> properties) |
AsyncTableFunction<Row> | HBaseTableSource.getAsyncLookupFunction(String[] lookupKeys) |
DataSet<Row> | HBaseTableSource.getDataSet(ExecutionEnvironment execEnv) |
DataStream<Row> | HBaseTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
TableFunction<Row> | HBaseTableSource.getLookupFunction(String[] lookupKeys) |
TypeInformation<Row> | HBaseRowInputFormat.getProducedType() |
TypeInformation<Row> | HBaseUpsertTableSink.getRecordType() |
TypeInformation<Row> | HBaseLookupFunction.getResultType() |
TypeInformation<Row> | HBaseTableSource.getReturnType() |
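
As a rough sketch of how these methods are reached in practice, the snippet below builds an HBaseTableSource and notes where the planner calls into it. It assumes the 1.10-era flink-hbase connector (org.apache.flink.addons.hbase); the table name, column family, and fields are hypothetical.

```java
import org.apache.flink.addons.hbase.HBaseTableSource;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HBaseSourceSketch {
    public static void main(String[] args) {
        // Standard HBase client configuration (reads hbase-site.xml from the classpath).
        Configuration conf = HBaseConfiguration.create();

        // Table name, row key, and columns below are hypothetical.
        HBaseTableSource source = new HBaseTableSource(conf, "users");
        source.setRowKey("rowkey", String.class);
        source.addColumn("f", "name", String.class);
        source.addColumn("f", "age", Integer.class);

        // getDataSet / getDataStream / getLookupFunction (listed above) are
        // invoked by the planner once the source is registered, e.g.:
        // tableEnv.registerTableSource("users", source);
    }
}
```
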
Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | HBaseUpsertTableSink.consumeDataStream(DataStream<Tuple2<Boolean,Row>> dataStream) |
void | HBaseUpsertTableSink.emitDataStream(DataStream<Tuple2<Boolean,Row>> dataStream) |
void | HBaseUpsertSinkFunction.invoke(Tuple2<Boolean,Row> value, SinkFunction.Context context) |
Modifier and Type | Method and Description |
---|---|
Row | HBaseReadWriteHelper.parseToRow(org.apache.hadoop.hbase.client.Result result) Parses an HBase Result into a Row. |
Row | HBaseReadWriteHelper.parseToRow(org.apache.hadoop.hbase.client.Result result, Object rowKey) Parses an HBase Result into a Row. |
Modifier and Type | Method and Description |
---|---|
org.apache.hadoop.hbase.client.Delete | HBaseReadWriteHelper.createDeleteMutation(Row row) Returns an instance of Delete that removes a record from an HBase table. |
org.apache.hadoop.hbase.client.Put | HBaseReadWriteHelper.createPutMutation(Row row) Returns an instance of Put that writes a record to an HBase table. |
Modifier and Type | Method and Description |
---|---|
static TypeInformation<Row> | Types.ROW_NAMED(String[] fieldNames, TypeInformation<?>... types) Returns type information for Row with fields of the given types and with given names. |
static TypeInformation<Row> | Types.ROW(TypeInformation<?>... types) Returns type information for Row with fields of the given types. |
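
A short sketch of how these two factory methods describe a Row schema; the field names and types are illustrative.

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.types.Row;

public class RowTypesExample {
    public static void main(String[] args) {
        // Positional fields only (named f0, f1, ...).
        TypeInformation<Row> positional = Types.ROW(Types.LONG, Types.STRING);

        // Same field types, but with explicit names.
        TypeInformation<Row> named = Types.ROW_NAMED(
            new String[]{"id", "name"},
            Types.LONG, Types.STRING);

        System.out.println(positional); // Row(f0: Long, f1: String)
        System.out.println(named);      // Row(id: Long, name: String)
    }
}
```
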
Modifier and Type | Method and Description |
---|---|
protected Row | RowCsvInputFormat.fillRecord(Row reuse, Object[] parsedValues) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> | RowCsvInputFormat.getProducedType() |
Modifier and Type | Method and Description |
---|---|
protected Row | RowCsvInputFormat.fillRecord(Row reuse, Object[] parsedValues) |
Modifier and Type | Method and Description |
---|---|
Row | JDBCInputFormat.nextRecord(Row row) Stores the next resultSet row in a tuple. |
Modifier and Type | Method and Description |
---|---|
Row | JDBCInputFormat.nextRecord(Row row) Stores the next resultSet row in a tuple. |
static void | JDBCUtils.setRecordToStatement(PreparedStatement upload, int[] typesArray, Row row) Adds a record to the prepared statement. |
void | JDBCOutputFormat.writeRecord(Row row) |
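
These legacy JDBC classes read and write Row records directly. Below is a minimal sketch of reading a database table into a DataSet<Row> with JDBCInputFormat; the driver class, URL, and query are placeholders, and it assumes the old flink-jdbc connector in org.apache.flink.api.java.io.jdbc.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.types.Row;

public class JdbcRowExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // The produced TypeInformation<Row> must match the SELECT list.
        RowTypeInfo rowTypeInfo = new RowTypeInfo(Types.LONG, Types.STRING);

        JDBCInputFormat inputFormat = JDBCInputFormat.buildJDBCInputFormat()
            .setDrivername("org.postgresql.Driver")          // hypothetical driver
            .setDBUrl("jdbc:postgresql://localhost:5432/db") // hypothetical URL
            .setQuery("SELECT id, name FROM users")          // hypothetical query
            .setRowTypeInfo(rowTypeInfo)
            .finish();

        DataSet<Row> rows = env.createInput(inputFormat);
        rows.print();
    }
}
```
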
Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | JDBCAppendTableSink.consumeDataStream(DataStream<Row> dataStream) |
DataStreamSink<?> | JDBCUpsertTableSink.consumeDataStream(DataStream<Tuple2<Boolean,Row>> dataStream) |
void | JDBCAppendTableSink.emitDataSet(DataSet<Row> dataSet) |
void | JDBCAppendTableSink.emitDataStream(DataStream<Row> dataStream) |
void | JDBCUpsertTableSink.emitDataStream(DataStream<Tuple2<Boolean,Row>> dataStream) |
void | JDBCUpsertOutputFormat.writeRecord(Tuple2<Boolean,Row> tuple2) |
Modifier and Type | Method and Description |
---|---|
void | JDBCWriter.addRecord(Tuple2<Boolean,Row> record) Adds a record to the writer; the writer may cache the data. |
void | AppendOnlyWriter.addRecord(Tuple2<Boolean,Row> record) |
void | UpsertWriter.addRecord(Tuple2<Boolean,Row> record) |
Modifier and Type | Method and Description |
---|---|
TypeComparator<Row> | RowTypeInfo.createComparator(int[] logicalKeyFields, boolean[] orders, int logicalFieldOffset, ExecutionConfig config) |
TypeSerializer<Row> | RowTypeInfo.createSerializer(ExecutionConfig config) |
protected CompositeType.TypeComparatorBuilder<Row> | RowTypeInfo.createTypeComparatorBuilder() |
Modifier and Type | Method and Description |
---|---|
Row | RowSerializer.copy(Row from) |
Row | RowSerializer.copy(Row from, Row reuse) |
Row | RowSerializer.createInstance() |
Row | RowSerializer.deserialize(DataInputView source) |
Row | RowSerializer.deserialize(Row reuse, DataInputView source) |
Row | RowComparator.readWithKeyDenormalization(Row reuse, DataInputView source) |
Modifier and Type | Method and Description |
---|---|
TypeSerializer<Row> | RowSerializer.duplicate() |
TypeComparator<Row> | RowComparator.duplicate() |
TypeSerializerSchemaCompatibility<Row> | RowSerializer.RowSerializerConfigSnapshot.resolveSchemaCompatibility(TypeSerializer<Row> newSerializer) Deprecated. |
TypeSerializerSnapshot<Row> | RowSerializer.snapshotConfiguration() |
Modifier and Type | Method and Description |
---|---|
int | RowComparator.compare(Row first, Row second) |
Row | RowSerializer.copy(Row from) |
Row | RowSerializer.copy(Row from, Row reuse) |
Row | RowSerializer.deserialize(Row reuse, DataInputView source) |
boolean | RowComparator.equalToReference(Row candidate) |
int | RowComparator.hash(Row record) |
void | RowComparator.putNormalizedKey(Row record, MemorySegment target, int offset, int numBytes) |
Row | RowComparator.readWithKeyDenormalization(Row reuse, DataInputView source) |
void | RowSerializer.serialize(Row record, DataOutputView target) |
void | RowComparator.setReference(Row toCompare) |
static void | NullMaskUtils.writeNullMask(int len, Row value, DataOutputView target) |
void | RowComparator.writeWithKeyNormalization(Row record, DataOutputView target) |
Modifier and Type | Method and Description |
---|---|
int | RowComparator.compareToReference(TypeComparator<Row> referencedComparator) |
TypeSerializerSchemaCompatibility<Row> | RowSerializer.RowSerializerConfigSnapshot.resolveSchemaCompatibility(TypeSerializer<Row> newSerializer) Deprecated. |
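
A small round-trip sketch of the serializer listed above: RowTypeInfo.createSerializer yields the RowSerializer, which writes a Row to bytes and reads it back. The stream wrappers come from org.apache.flink.core.memory.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.core.memory.DataInputViewStreamWrapper;
import org.apache.flink.core.memory.DataOutputViewStreamWrapper;
import org.apache.flink.types.Row;

public class RowSerializerExample {
    public static void main(String[] args) throws Exception {
        // createSerializer(...) returns the RowSerializer listed above.
        RowTypeInfo typeInfo = new RowTypeInfo(Types.INT, Types.STRING);
        TypeSerializer<Row> serializer = typeInfo.createSerializer(new ExecutionConfig());

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        serializer.serialize(Row.of(1, "a"), new DataOutputViewStreamWrapper(bytes));

        Row copy = serializer.deserialize(
            new DataInputViewStreamWrapper(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(copy); // 1,a
    }
}
```
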
Modifier and Type | Method and Description |
---|---|
protected Object[] | CassandraRowOutputFormat.extractFields(Row record) |
Modifier and Type | Method and Description |
---|---|
TableSink<Row> | HiveTableSink.configure(String[] fieldNames, TypeInformation<?>[] fieldTypes) |
TableSink<Row> | HiveTableFactory.createTableSink(Map<String,String> properties) |
TableSink<Row> | HiveTableFactory.createTableSink(ObjectPath tablePath, CatalogTable table) |
OutputFormat<Row> | HiveTableSink.getOutputFormat() |
Modifier and Type | Method and Description |
---|---|
LinkedHashMap<String,String> | HivePartitionComputer.generatePartValues(Row in) |
Modifier and Type | Method and Description |
---|---|
Row | AvroRowDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DeserializationSchema<Row> | AvroRowFormatFactory.createDeserializationSchema(Map<String,String> properties) |
SerializationSchema<Row> | AvroRowFormatFactory.createSerializationSchema(Map<String,String> properties) |
TypeInformation<Row> | AvroRowDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
byte[] | AvroRowSerializationSchema.serialize(Row row) |
Modifier and Type | Method and Description |
---|---|
static <T extends org.apache.avro.specific.SpecificRecord> TypeInformation<Row> | AvroSchemaConverter.convertToTypeInfo(Class<T> avroClass) Converts an Avro class into a nested row structure with deterministic field order and data types that are compatible with Flink's Table & SQL API. |
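
A minimal sketch of the converter above; "User" stands in for any Avro-generated SpecificRecord class and is hypothetical here.

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;
import org.apache.flink.types.Row;

// "User" is a placeholder for an Avro-generated SpecificRecord class.
TypeInformation<Row> typeInfo = AvroSchemaConverter.convertToTypeInfo(User.class);
System.out.println(typeInfo); // a nested Row type mirroring the Avro schema
```
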
Modifier and Type | Method and Description |
---|---|
Row | CsvRowDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DeserializationSchema<Row> | CsvRowFormatFactory.createDeserializationSchema(Map<String,String> properties) |
SerializationSchema<Row> | CsvRowFormatFactory.createSerializationSchema(Map<String,String> properties) |
TypeInformation<Row> | CsvRowDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean | CsvRowDeserializationSchema.isEndOfStream(Row nextElement) |
byte[] | CsvRowSerializationSchema.serialize(Row row) |
Constructor and Description |
---|
Builder(TypeInformation<Row> typeInfo) Creates a CSV deserialization schema for the given TypeInformation with optional parameters. |
Builder(TypeInformation<Row> typeInfo) Creates a CsvRowSerializationSchema expecting the given TypeInformation. |
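
A short sketch of the deserialization builder above; the schema, delimiter, and input line are illustrative.

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.formats.csv.CsvRowDeserializationSchema;
import org.apache.flink.types.Row;

public class CsvSchemaExample {
    public static void main(String[] args) throws Exception {
        TypeInformation<Row> typeInfo = Types.ROW_NAMED(
            new String[]{"id", "name"}, Types.LONG, Types.STRING);

        CsvRowDeserializationSchema schema =
            new CsvRowDeserializationSchema.Builder(typeInfo)
                .setFieldDelimiter(';') // one of the optional parameters
                .build();

        Row row = schema.deserialize("42;alice".getBytes());
        System.out.println(row); // 42,alice
    }
}
```
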
Modifier and Type | Method and Description |
---|---|
Row | JsonRowDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
DeserializationSchema<Row> | JsonRowFormatFactory.createDeserializationSchema(Map<String,String> properties) |
SerializationSchema<Row> | JsonRowFormatFactory.createSerializationSchema(Map<String,String> properties) |
TypeInformation<Row> | JsonRowDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean | JsonRowDeserializationSchema.isEndOfStream(Row nextElement) |
byte[] | JsonRowSerializationSchema.serialize(Row row) |
Constructor and Description |
---|
Builder(TypeInformation<Row> typeInfo) Creates a JSON deserialization schema for the given type information. |
Builder(TypeInformation<Row> typeInfo) Creates a JSON serialization schema for the given type information. |
JsonRowDeserializationSchema(TypeInformation<Row> typeInfo) Deprecated. Use the provided JsonRowDeserializationSchema.Builder instead. |
JsonRowSerializationSchema(TypeInformation<Row> typeInfo) Deprecated. Use the provided JsonRowSerializationSchema.Builder instead. |
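
A minimal sketch of the Builder that the deprecation notes above point to; the schema and sample message are illustrative.

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.formats.json.JsonRowDeserializationSchema;
import org.apache.flink.types.Row;

public class JsonSchemaExample {
    public static void main(String[] args) throws Exception {
        TypeInformation<Row> typeInfo = Types.ROW_NAMED(
            new String[]{"id", "name"}, Types.LONG, Types.STRING);

        // The Builder replaces the deprecated constructor shown above.
        JsonRowDeserializationSchema schema =
            new JsonRowDeserializationSchema.Builder(typeInfo).build();

        Row row = schema.deserialize("{\"id\":42,\"name\":\"alice\"}".getBytes());
        System.out.println(row); // 42,alice
    }
}
```
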
Modifier and Type | Method and Description |
---|---|
protected Row | ParquetRowInputFormat.convert(Row row) |
Modifier and Type | Method and Description |
---|---|
TableSource<Row> | ParquetTableSource.applyPredicate(List<Expression> predicates) |
DataSet<Row> | ParquetTableSource.getDataSet(ExecutionEnvironment executionEnvironment) |
TypeInformation<Row> | ParquetRowInputFormat.getProducedType() |
TypeInformation<Row> | ParquetTableSource.getReturnType() |
TableSource<Row> | ParquetTableSource.projectFields(int[] fields) |
Modifier and Type | Method and Description |
---|---|
protected E | ParquetPojoInputFormat.convert(Row row) |
protected Map | ParquetMapInputFormat.convert(Row row) |
protected abstract E | ParquetInputFormat.convert(Row row) This ParquetInputFormat reads Parquet records as Row by default. |
protected Row | ParquetRowInputFormat.convert(Row row) |
Modifier and Type | Method and Description |
---|---|
Row | RowMaterializer.getCurrentRecord() |
Row | RowConverter.getCurrentRow() |
Modifier and Type | Method and Description |
---|---|
org.apache.parquet.io.api.RecordMaterializer<Row> | RowReadSupport.prepareForRead(Configuration configuration, Map<String,String> keyValueMetaData, org.apache.parquet.schema.MessageType fileSchema, org.apache.parquet.hadoop.api.ReadSupport.ReadContext readContext) |
Modifier and Type | Method and Description |
---|---|
Row | MapperAdapter.map(Row row) |
Row | ModelMapperAdapter.map(Row row) |
abstract Row | Mapper.map(Row row) Map a row to a new row. |
Modifier and Type | Method and Description |
---|---|
Row | MapperAdapter.map(Row row) |
Row | ModelMapperAdapter.map(Row row) |
abstract Row | Mapper.map(Row row) Map a row to a new row. |
Modifier and Type | Method and Description |
---|---|
abstract void | ModelMapper.loadModel(List<Row> modelRows) Load the model from the list of rows. |
Modifier and Type | Method and Description |
---|---|
List<Row> | RowsModelSource.getModelRows(RuntimeContext runtimeContext) |
List<Row> | BroadcastVariableModelSource.getModelRows(RuntimeContext runtimeContext) |
List<Row> | ModelSource.getModelRows(RuntimeContext runtimeContext) Gets the rows containing the model. |
Constructor and Description |
---|
RowsModelSource(List<Row> modelRows) Construct a RowsModelSource with the given rows containing a model. |
Modifier and Type | Method and Description |
---|---|
Row | OrcRowSplitReader.nextRecord(Row reuse) |
Modifier and Type | Method and Description |
---|---|
TableSource<Row> | OrcTableSource.applyPredicate(List<Expression> predicates) |
DataSet<Row> | OrcTableSource.getDataSet(ExecutionEnvironment execEnv) |
TypeInformation<Row> | OrcRowInputFormat.getProducedType() |
TypeInformation<Row> | OrcTableSource.getReturnType() |
TableSource<Row> | OrcTableSource.projectFields(int[] selectedFields) |
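
For orientation, a sketch of constructing the source whose methods are listed above; the path and ORC schema string are illustrative.

```java
import org.apache.flink.orc.OrcTableSource;

public class OrcSourceSketch {
    public static void main(String[] args) {
        // Path and schema are placeholders.
        OrcTableSource source = OrcTableSource.builder()
            .path("file:///tmp/data.orc")
            .forOrcSchema("struct<id:bigint,name:string>")
            .build();

        // projectFields / applyPredicate (listed above) are applied by the
        // planner during optimization once the source is registered.
    }
}
```
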
Modifier and Type | Method and Description |
---|---|
Row | OrcRowSplitReader.nextRecord(Row reuse) |
Modifier and Type | Method and Description |
---|---|
Row | StreamSQLTestProgram.KillMapper.map(Row value) |
Row | BatchSQLTestProgram.DataGenerator.next() |
Modifier and Type | Method and Description |
---|---|
DataStream<Row> | StreamSQLTestProgram.GeneratorTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
InputFormat<Row,?> | BatchSQLTestProgram.GeneratorTableSource.getInputFormat() |
TypeInformation<Row> | StreamSQLTestProgram.Generator.getProducedType() |
TypeInformation<Row> | StreamSQLTestProgram.GeneratorTableSource.getReturnType() |
Modifier and Type | Method and Description |
---|---|
String | StreamSQLTestProgram.KeyBucketAssigner.getBucketId(Row element, BucketAssigner.Context context) |
Row | StreamSQLTestProgram.KillMapper.map(Row value) |
Modifier and Type | Method and Description |
---|---|
void | StreamSQLTestProgram.Generator.run(SourceFunction.SourceContext<Row> ctx) |
Modifier and Type | Method and Description |
---|---|
protected CassandraSink<Row> | CassandraSink.CassandraRowSinkBuilder.createSink() |
protected CassandraSink<Row> | CassandraSink.CassandraRowSinkBuilder.createWriteAheadSink() |
TypeInformation<Row> | CassandraAppendTableSink.getOutputType() |
Modifier and Type | Method and Description |
---|---|
protected Object[] | CassandraRowSink.extract(Row record) |
Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | CassandraAppendTableSink.consumeDataStream(DataStream<Row> dataStream) |
void | CassandraAppendTableSink.emitDataStream(DataStream<Row> dataStream) |
protected boolean | CassandraRowWriteAheadSink.sendValues(Iterable<Row> values, long checkpointId, long timestamp) |
Constructor and Description |
---|
CassandraRowSinkBuilder(DataStream<Row> input, TypeInformation<Row> typeInfo, TypeSerializer<Row> serializer) |
CassandraRowWriteAheadSink(String insertQuery, TypeSerializer<Row> serializer, ClusterBuilder builder, CheckpointCommitter committer) |
Modifier and Type | Method and Description |
---|---|
TableSink<Tuple2<Boolean,Row>> |
ElasticsearchUpsertTableSinkBase.configure(String[] fieldNames,
TypeInformation<?>[] fieldTypes) |
protected abstract SinkFunction<Tuple2<Boolean,Row>> |
ElasticsearchUpsertTableSinkBase.createSinkFunction(List<ElasticsearchUpsertTableSinkBase.Host> hosts,
ActionRequestFailureHandler failureHandler,
Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions,
ElasticsearchUpsertTableSinkBase.ElasticsearchUpsertSinkFunction upsertFunction) |
StreamTableSink<Tuple2<Boolean,Row>> |
ElasticsearchUpsertTableSinkFactoryBase.createStreamTableSink(Map<String,String> properties) |
TypeInformation<Tuple2<Boolean,Row>> |
ElasticsearchUpsertTableSinkBase.getOutputType() |
TypeInformation<Row> |
ElasticsearchUpsertTableSinkBase.getRecordType() |
Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | ElasticsearchUpsertTableSinkBase.consumeDataStream(DataStream<Tuple2<Boolean,Row>> dataStream) |
protected abstract ElasticsearchUpsertTableSinkBase | ElasticsearchUpsertTableSinkBase.copy(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory) |
protected abstract ElasticsearchUpsertTableSinkBase | ElasticsearchUpsertTableSinkFactoryBase.createElasticsearchUpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
void | ElasticsearchUpsertTableSinkBase.emitDataStream(DataStream<Tuple2<Boolean,Row>> dataStream) |
void | ElasticsearchUpsertTableSinkBase.ElasticsearchUpsertSinkFunction.process(Tuple2<Boolean,Row> element, RuntimeContext ctx, RequestIndexer indexer) |
Constructor and Description |
---|
ElasticsearchUpsertSinkFunction(String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory, int[] keyFieldIndices) |
ElasticsearchUpsertTableSinkBase(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory) |
Modifier and Type | Method and Description |
---|---|
protected SinkFunction<Tuple2<Boolean,Row>> | Elasticsearch6UpsertTableSink.createSinkFunction(List<ElasticsearchUpsertTableSinkBase.Host> hosts, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.ElasticsearchUpsertSinkFunction upsertSinkFunction) |
Modifier and Type | Method and Description |
---|---|
protected ElasticsearchUpsertTableSinkBase | Elasticsearch6UpsertTableSink.copy(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory) |
protected ElasticsearchUpsertTableSinkBase | Elasticsearch6UpsertTableSinkFactory.createElasticsearchUpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
Constructor and Description |
---|
Elasticsearch6UpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
Modifier and Type | Method and Description |
---|---|
protected SinkFunction<Tuple2<Boolean,Row>> | Elasticsearch7UpsertTableSink.createSinkFunction(List<ElasticsearchUpsertTableSinkBase.Host> hosts, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.ElasticsearchUpsertSinkFunction upsertSinkFunction) |
Modifier and Type | Method and Description |
---|---|
protected ElasticsearchUpsertTableSinkBase | Elasticsearch7UpsertTableSink.copy(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions, ElasticsearchUpsertTableSinkBase.RequestFactory requestFactory) |
protected ElasticsearchUpsertTableSinkBase | Elasticsearch7UpsertTableSinkFactory.createElasticsearchUpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String docType, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
Constructor and Description |
---|
Elasticsearch7UpsertTableSink(boolean isAppendOnly, TableSchema schema, List<ElasticsearchUpsertTableSinkBase.Host> hosts, String index, String keyDelimiter, String keyNullLiteral, SerializationSchema<Row> serializationSchema, org.elasticsearch.common.xcontent.XContentType contentType, ActionRequestFailureHandler failureHandler, Map<ElasticsearchUpsertTableSinkBase.SinkOption,String> sinkOptions) |
Modifier and Type | Field and Description |
---|---|
protected Optional<FlinkKafkaPartitioner<Row>> | KafkaTableSinkBase.partitioner Partitioner to select Kafka partition for each item. |
protected SerializationSchema<Row> | KafkaTableSinkBase.serializationSchema Serialization schema for encoding records to Kafka. |
Modifier and Type | Method and Description |
---|---|
protected FlinkKafkaConsumerBase<Row> | Kafka08TableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected FlinkKafkaConsumerBase<Row> | KafkaTableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected FlinkKafkaConsumerBase<Row> | Kafka011TableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected FlinkKafkaConsumerBase<Row> | Kafka010TableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected FlinkKafkaConsumerBase<Row> | Kafka09TableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected abstract FlinkKafkaConsumerBase<Row> | KafkaTableSourceBase.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) Creates a version-specific Kafka consumer. |
protected FlinkKafkaProducerBase<Row> | Kafka08TableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected SinkFunction<Row> | KafkaTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected SinkFunction<Row> | Kafka011TableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected FlinkKafkaProducerBase<Row> | Kafka010TableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected FlinkKafkaProducerBase<Row> | Kafka09TableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected abstract SinkFunction<Row> | KafkaTableSinkBase.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) Returns the version-specific Kafka producer. |
StreamTableSink<Row> | KafkaTableSourceSinkFactoryBase.createStreamTableSink(Map<String,String> properties) |
StreamTableSource<Row> | KafkaTableSourceSinkFactoryBase.createStreamTableSource(Map<String,String> properties) |
DataStream<Row> | KafkaTableSourceBase.getDataStream(StreamExecutionEnvironment env) NOTE: This method is for internal use only for defining a TableSource. |
DeserializationSchema<Row> | KafkaTableSourceBase.getDeserializationSchema() Returns the deserialization schema. |
protected FlinkKafkaConsumerBase<Row> | KafkaTableSourceBase.getKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) Returns a version-specific Kafka consumer with the start position configured. |
TypeInformation<Row> | KafkaTableSinkBase.getOutputType() |
TypeInformation<Row> | KafkaTableSourceBase.getReturnType() |
Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | KafkaTableSinkBase.consumeDataStream(DataStream<Row> dataStream) |
protected FlinkKafkaConsumerBase<Row> | Kafka08TableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected FlinkKafkaConsumerBase<Row> | KafkaTableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected FlinkKafkaConsumerBase<Row> | Kafka011TableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected FlinkKafkaConsumerBase<Row> | Kafka010TableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected FlinkKafkaConsumerBase<Row> | Kafka09TableSource.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) |
protected abstract FlinkKafkaConsumerBase<Row> | KafkaTableSourceBase.createKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) Creates a version-specific Kafka consumer. |
protected FlinkKafkaProducerBase<Row> | Kafka08TableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected SinkFunction<Row> | KafkaTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected SinkFunction<Row> | Kafka011TableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected FlinkKafkaProducerBase<Row> | Kafka010TableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected FlinkKafkaProducerBase<Row> | Kafka09TableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) |
protected abstract SinkFunction<Row> | KafkaTableSinkBase.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner) Returns the version-specific Kafka producer. |
protected KafkaTableSinkBase | Kafka08TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema) |
protected KafkaTableSinkBase | KafkaTableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema) |
protected KafkaTableSinkBase | Kafka011TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema) |
protected KafkaTableSinkBase | Kafka010TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema) |
protected KafkaTableSinkBase | Kafka09TableSourceSinkFactory.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema) |
protected abstract KafkaTableSinkBase | KafkaTableSourceSinkFactoryBase.createKafkaTableSink(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema) Constructs the version-specific Kafka table sink. |
protected KafkaTableSourceBase | Kafka08TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets) |
protected KafkaTableSourceBase | KafkaTableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets) |
protected KafkaTableSourceBase | Kafka011TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets) |
protected KafkaTableSourceBase | Kafka010TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets) |
protected KafkaTableSourceBase | Kafka09TableSourceSinkFactory.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets) |
protected abstract KafkaTableSourceBase | KafkaTableSourceSinkFactoryBase.createKafkaTableSource(TableSchema schema, Optional<String> proctimeAttribute, List<RowtimeAttributeDescriptor> rowtimeAttributeDescriptors, Map<String,String> fieldMapping, String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets) Constructs the version-specific Kafka table source. |
void | KafkaTableSinkBase.emitDataStream(DataStream<Row> dataStream) |
protected FlinkKafkaConsumerBase<Row> | KafkaTableSourceBase.getKafkaConsumer(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema) Returns a version-specific Kafka consumer with the start position configured. |
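
The table sources above ultimately wrap a version-specific Kafka consumer created by createKafkaConsumer. Below is a minimal sketch of the same idea at the DataStream level, pairing the universal FlinkKafkaConsumer with a JSON row schema; the broker address, group id, and topic are placeholders.

```java
import java.util.Properties;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.formats.json.JsonRowDeserializationSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.types.Row;

public class KafkaRowExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.setProperty("group.id", "row-demo");                // hypothetical group

        TypeInformation<Row> typeInfo = Types.ROW_NAMED(
            new String[]{"id", "name"}, Types.LONG, Types.STRING);

        DataStream<Row> rows = env.addSource(new FlinkKafkaConsumer<>(
            "my-topic",                                           // hypothetical topic
            new JsonRowDeserializationSchema.Builder(typeInfo).build(),
            props));

        rows.print();
        env.execute("kafka-row-demo");
    }
}
```
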
Modifier and Type | Method and Description |
---|---|
static List<Row> | TableUtils.collectToList(Table table) Converts a Flink Table to a Java List. |
static TypeInformation<Row> | Types.ROW(String[] fieldNames, TypeInformation<?>[] types) Deprecated. Returns type information for Row with fields of the given types and with given names. |
static TypeInformation<Row> | Types.ROW(TypeInformation<?>... types) Deprecated. Returns type information for Row with fields of the given types. |
TypeInformation<Row> | TableSchema.toRowType() Deprecated. Use TableSchema.toRowDataType() instead. |
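
A brief sketch of collectToList. It assumes an existing TableEnvironment named tEnv with a registered table named Orders (both hypothetical), and that TableUtils lives at its 1.10-era location org.apache.flink.table.api.TableUtils.

```java
import java.util.List;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableUtils;
import org.apache.flink.types.Row;

// tEnv and the "Orders" table are assumed to exist already (hypothetical).
Table table = tEnv.sqlQuery("SELECT id, name FROM Orders");
List<Row> rows = TableUtils.collectToList(table); // executes the job, materializes the result
rows.forEach(System.out::println);
```
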
Modifier and Type | Method and Description |
---|---|
static String[] | CliUtils.rowToString(Row row) |
Modifier and Type | Method and Description |
---|---|
TypedResult<List<Tuple2<Boolean,Row>>> | Executor.retrieveResultChanges(String sessionId, String resultId) Asks for the next changelog results (non-blocking). |
List<Row> | Executor.retrieveResultPage(String resultId, int page) Returns the rows that are part of the current page, or throws an exception if the snapshot has expired. |
Modifier and Type | Method and Description |
---|---|
OutputFormat<Row> | CollectBatchTableSink.getOutputFormat() |
TupleTypeInfo<Tuple2<Boolean,Row>> | CollectStreamTableSink.getOutputType() |
TypeInformation<Row> | CollectStreamTableSink.getRecordType() |
TypeSerializer<Row> | CollectBatchTableSink.getSerializer() Returns the serializer for deserializing the collected result. |
TypedResult<List<Tuple2<Boolean,Row>>> | LocalExecutor.retrieveResultChanges(String sessionId, String resultId) |
List<Row> | LocalExecutor.retrieveResultPage(String resultId, int page) |
Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | CollectStreamTableSink.consumeDataStream(DataStream<Tuple2<Boolean,Row>> stream) |
void | CollectBatchTableSink.emitDataSet(DataSet<Row> dataSet) |
void | CollectStreamTableSink.emitDataStream(DataStream<Tuple2<Boolean,Row>> stream) |
Constructor and Description |
---|
CollectBatchTableSink(String accumulatorName, TypeSerializer<Row> serializer, TableSchema tableSchema) |
CollectStreamTableSink(InetAddress targetAddress, int targetPort, TypeSerializer<Tuple2<Boolean,Row>> serializer, TableSchema tableSchema) |
Modifier and Type | Method and Description |
---|---|
protected List<Row> | MaterializedCollectStreamResult.getMaterializedTable() |
TypedResult<List<Tuple2<Boolean,Row>>> | ChangelogResult.retrieveChanges() Retrieves the available result records. |
TypedResult<List<Tuple2<Boolean,Row>>> | ChangelogCollectStreamResult.retrieveChanges() |
List<Row> | MaterializedCollectStreamResult.retrievePage(int page) |
List<Row> | MaterializedResult.retrievePage(int page) Retrieves a page of a snapshotted result. |
List<Row> | MaterializedCollectBatchResult.retrievePage(int page) |
Modifier and Type | Method and Description |
---|---|
protected void | MaterializedCollectStreamResult.processRecord(Tuple2<Boolean,Row> change) |
protected abstract void | CollectStreamResult.processRecord(Tuple2<Boolean,Row> change) |
protected void | ChangelogCollectStreamResult.processRecord(Tuple2<Boolean,Row> change) |
Modifier and Type | Method and Description |
---|---|
Csv | Csv.schema(TypeInformation<Row> schemaType) Sets the format schema with field names and the types. |
Json | Json.schema(TypeInformation<Row> schemaType) Sets the schema using type information. |
Modifier and Type | Method and Description |
---|---|
Row | RowPartitionComputer.projectColumnsToWrite(Row in) |
Modifier and Type | Method and Description |
---|---|
LinkedHashMap<String,String> | RowPartitionComputer.generatePartValues(Row in) |
Row | RowPartitionComputer.projectColumnsToWrite(Row in) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> | ReplicateRows.getResultType() |
Modifier and Type | Method and Description |
---|---|
Row | PythonScalarFunctionOperator.getUdfInput(org.apache.flink.table.runtime.types.CRow element) |
Modifier and Type | Method and Description |
---|---|
PythonFunctionRunner<Row> | PythonScalarFunctionOperator.createPythonFunctionRunner(org.apache.beam.sdk.fn.data.FnDataReceiver<Row> resultReceiver, PythonEnvironmentManager pythonEnvironmentManager) |
Modifier and Type | Method and Description |
---|---|
PythonFunctionRunner<Row> | PythonScalarFunctionOperator.createPythonFunctionRunner(org.apache.beam.sdk.fn.data.FnDataReceiver<Row> resultReceiver, PythonEnvironmentManager pythonEnvironmentManager) |
Constructor and Description |
---|
PythonScalarFunctionRunner(String taskName, org.apache.beam.sdk.fn.data.FnDataReceiver<Row> resultReceiver, PythonFunctionInfo[] scalarFunctions, PythonEnvironmentManager environmentManager, RowType inputType, RowType outputType) |
Modifier and Type | Method and Description |
---|---|
TableSink<Row> | CsvTableSink.configure(String[] fieldNames, TypeInformation<?>[] fieldTypes) |
BatchTableSink<Row> | CsvBatchTableSinkFactory.createBatchTableSink(Map<String,String> properties) |
StreamTableSink<Row> | CsvAppendTableSinkFactory.createStreamTableSink(Map<String,String> properties) |
Modifier and Type | Method and Description |
---|---|
String | CsvTableSink.CsvFormatter.map(Row row) |
Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | CsvTableSink.consumeDataStream(DataStream<Row> dataStream) |
void | CsvTableSink.emitDataSet(DataSet<Row> dataSet) |
void | CsvTableSink.emitDataStream(DataStream<Row> dataStream) |
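
A minimal sketch of constructing and configuring the sink above; the output path, delimiter, and schema are illustrative.

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.sinks.CsvTableSink;
import org.apache.flink.table.sinks.TableSink;
import org.apache.flink.types.Row;

public class CsvSinkSketch {
    public static void main(String[] args) {
        // Path and delimiter are placeholders.
        CsvTableSink sink = new CsvTableSink("/tmp/report.csv", "|");

        // configure(...) returns a copy of the sink bound to the given schema.
        TableSink<Row> configured = sink.configure(
            new String[]{"id", "name"},
            new TypeInformation<?>[]{Types.LONG, Types.STRING});

        // The configured sink would then be registered, e.g.:
        // tableEnv.registerTableSink("CsvSink", configured);
    }
}
```
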
Modifier and Type | Method and Description |
---|---|
BatchTableSource<Row> | CsvBatchTableSourceFactory.createBatchTableSource(Map<String,String> properties) |
StreamTableSource<Row> | CsvAppendTableSourceFactory.createStreamTableSource(Map<String,String> properties) |
AsyncTableFunction<Row> | CsvTableSource.getAsyncLookupFunction(String[] lookupKeys) |
DataSet<Row> | CsvTableSource.getDataSet(ExecutionEnvironment execEnv) |
DataStream<Row> | CsvTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
TableFunction<Row> | CsvTableSource.getLookupFunction(String[] lookupKeys) |
TypeInformation<Row> | CsvTableSource.CsvLookupFunction.getResultType() |
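
For context, a sketch of building the source whose methods are listed above via its builder; the file path and field layout are illustrative.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.table.sources.CsvTableSource;

public class CsvSourceSketch {
    public static void main(String[] args) {
        // Path and fields are placeholders.
        CsvTableSource source = CsvTableSource.builder()
            .path("/tmp/users.csv")
            .fieldDelimiter(",")
            .field("id", Types.LONG)
            .field("name", Types.STRING)
            .build();

        // getDataSet / getDataStream / getLookupFunction (listed above) are
        // invoked by the planner after registration, e.g.:
        // tableEnv.registerTableSource("users", source);
    }
}
```
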
Modifier and Type | Method and Description |
---|---|
abstract Watermark |
PunctuatedWatermarkAssigner.getWatermark(Row row,
long timestamp)
Returns the watermark for the current row or null if no watermark should be generated.
|
Modifier and Type | Method and Description |
---|---|
static Row | Row.copy(Row row) Creates a new Row which is copied from another row. |
static Row | Row.join(Row first, Row... remainings) Creates a new Row whose fields are copied from the other rows. |
static Row | Row.of(Object... values) Creates a new Row and assigns the given values to the Row's fields. |
static Row | Row.project(Row row, int[] fields) Creates a new Row with projected fields from another row. |
Modifier and Type | Method and Description |
---|---|
static Row | Row.copy(Row row) Creates a new Row which is copied from another row. |
static Row | Row.join(Row first, Row... remainings) Creates a new Row whose fields are copied from the other rows. |
static Row | Row.project(Row row, int[] fields) Creates a new Row with projected fields from another row. |
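
A short sketch exercising the static factory methods above; the field values are illustrative.

```java
import org.apache.flink.types.Row;

public class RowBasics {
    public static void main(String[] args) {
        Row user = Row.of("alice", 42, true);        // three positional fields

        Row copy = Row.copy(user);                   // field-by-field copy
        Row joined = Row.join(user, Row.of("x"));    // arity 3 + 1 = 4
        Row projected = Row.project(user, new int[]{0, 2}); // keep fields 0 and 2

        System.out.println(user.getArity());         // 3
        System.out.println(joined.getArity());       // 4
        System.out.println(projected.getField(0));   // alice
    }
}
```
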
Modifier and Type | Method and Description |
---|---|
Row | TransactionRowInputFormat.nextRecord(Row reuse) |
Modifier and Type | Method and Description |
---|---|
Row | TransactionRowInputFormat.nextRecord(Row reuse) |
Modifier and Type | Method and Description |
---|---|
TableSink<Row> | SpendReportTableSink.configure(String[] fieldNames, TypeInformation<?>[] fieldTypes) |
DataStream<Row> | UnboundedTransactionTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
InputFormat<Row,?> | BoundedTransactionTableSource.getInputFormat() |
Modifier and Type | Method and Description |
---|---|
void | SpendReportTableSink.emitDataSet(DataSet<Row> dataSet) |
void | SpendReportTableSink.emitDataStream(DataStream<Row> dataStream) |
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.