@Internal
public abstract class KafkaTableSinkBase
extends Object
implements AppendStreamTableSink<Row>

A version-agnostic Kafka AppendStreamTableSink.

The version-specific Kafka producers need to extend this class and
override createKafkaProducer(String, Properties, SerializationSchema, Optional).
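The extension pattern described above can be sketched without any Flink dependencies. All type names below are simplified, hypothetical stand-ins for the real Flink interfaces (SerializationSchema, SinkFunction, FlinkKafkaPartitioner); the "producer" only records what it would send instead of talking to Kafka:

```java
import java.util.Optional;
import java.util.Properties;

public class KafkaSinkSketch {
    // Simplified, hypothetical stand-ins for the Flink types.
    interface SerializationSchema<T> { byte[] serialize(T element); }
    interface SinkFunction<T> { void invoke(T value); }
    interface FlinkKafkaPartitioner<T> {}

    // Mirrors the template-method pattern of KafkaTableSinkBase: the base
    // class holds the common state and defers producer creation to a hook.
    abstract static class Base {
        protected final String topic;
        protected final Properties properties;
        protected final SerializationSchema<String> serializationSchema;
        protected final Optional<FlinkKafkaPartitioner<String>> partitioner;

        protected Base(String topic, Properties properties,
                Optional<FlinkKafkaPartitioner<String>> partitioner,
                SerializationSchema<String> serializationSchema) {
            this.topic = topic;
            this.properties = properties;
            this.partitioner = partitioner;
            this.serializationSchema = serializationSchema;
        }

        // The hook each version-specific sink must override.
        protected abstract SinkFunction<String> createKafkaProducer(String topic,
                Properties properties, SerializationSchema<String> serializationSchema,
                Optional<FlinkKafkaPartitioner<String>> partitioner);
    }

    // A "version-specific" subclass supplying the producer; a real one would
    // return the producer for its Kafka version. Here it just logs records.
    static class VersionSpecificSink extends Base {
        final StringBuilder delivered = new StringBuilder();

        VersionSpecificSink(String topic, Properties properties,
                Optional<FlinkKafkaPartitioner<String>> partitioner,
                SerializationSchema<String> serializationSchema) {
            super(topic, properties, partitioner, serializationSchema);
        }

        @Override
        protected SinkFunction<String> createKafkaProducer(String topic, Properties properties,
                SerializationSchema<String> serializationSchema,
                Optional<FlinkKafkaPartitioner<String>> partitioner) {
            return value -> delivered.append(topic).append(" <- ")
                    .append(new String(serializationSchema.serialize(value)));
        }
    }

    public static String demo() {
        VersionSpecificSink sink = new VersionSpecificSink(
                "events", new Properties(), Optional.empty(), s -> s.getBytes());
        // The base class would call this hook when wiring up the sink.
        sink.createKafkaProducer(sink.topic, sink.properties,
                sink.serializationSchema, sink.partitioner).invoke("hello");
        return sink.delivered.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // events <- hello
    }
}
```

The real subclasses (one per supported Kafka version) follow the same shape: all configuration lives in the base class, and only the producer construction differs.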
Modifier and Type | Field and Description
---|---
protected Optional<FlinkKafkaPartitioner<Row>> | partitioner: Partitioner to select Kafka partition for each item.
protected Properties | properties: Properties for the Kafka producer.
protected SerializationSchema<Row> | serializationSchema: Serialization schema for encoding records to Kafka.
protected String | topic: The Kafka topic to write to.
Modifier | Constructor and Description
---|---
protected | KafkaTableSinkBase(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)
Modifier and Type | Method and Description
---|---
KafkaTableSinkBase | configure(String[] fieldNames, TypeInformation<?>[] fieldTypes): Returns a copy of this TableSink configured with the field names and types of the table to emit.
DataStreamSink<?> | consumeDataStream(DataStream<Row> dataStream): Consumes the DataStream and returns the sink transformation DataStreamSink.
protected abstract SinkFunction<Row> | createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner): Returns the version-specific Kafka producer.
void | emitDataStream(DataStream<Row> dataStream): Emits the DataStream.
boolean | equals(Object o)
String[] | getFieldNames()
TypeInformation<?>[] | getFieldTypes()
TypeInformation<Row> | getOutputType()
int | hashCode()
Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface TableSink: getConsumedDataType, getTableSchema
protected final String topic
protected final Properties properties
protected final SerializationSchema<Row> serializationSchema
protected final Optional<FlinkKafkaPartitioner<Row>> partitioner
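The properties field above carries the standard Kafka producer client settings. As a rough, hypothetical illustration of assembling such a configuration (bootstrap.servers is the standard Kafka client setting for the initial broker list; the address is a placeholder):

```java
import java.util.Properties;

public class ProducerPropsDemo {
    // Builds the kind of Properties object passed to the constructor.
    // "localhost:9092" is a placeholder broker address.
    public static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("bootstrap.servers"));
    }
}
```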
protected KafkaTableSinkBase(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)
protected abstract SinkFunction<Row> createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner)
Parameters:
topic - Kafka topic to produce to.
properties - Properties for the Kafka producer.
serializationSchema - Serialization schema to use to create Kafka records.
partitioner - Partitioner to select Kafka partition.

public DataStreamSink<?> consumeDataStream(DataStream<Row> dataStream)
Description copied from interface: StreamTableSink
Consumes the DataStream and returns the sink transformation DataStreamSink. The returned DataStreamSink will be used to set resources for the sink operator.
Specified by:
consumeDataStream in interface StreamTableSink<Row>
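The wiring described above can be sketched in miniature. MiniStream and SinkFn below are hypothetical, simplified stand-ins for Flink's DataStream and SinkFunction (not the actual Flink implementation): the producer obtained from the createKafkaProducer hook is attached to the stream as its terminal sink:

```java
import java.util.ArrayList;
import java.util.List;

public class ConsumeSketch {
    // Hypothetical stand-ins for SinkFunction and DataStream.
    public interface SinkFn<T> { void invoke(T value); }

    public static class MiniStream<T> {
        private final List<T> elements;
        public MiniStream(List<T> elements) { this.elements = elements; }
        // Like DataStream.addSink: attaches the sink as the terminal operator
        // (here it simply pushes each element through it).
        public void addSink(SinkFn<T> sink) { elements.forEach(sink::invoke); }
    }

    // Sketch of consumeDataStream: obtain the producer (here a simple
    // collector standing in for createKafkaProducer(...)) and attach it.
    public static List<String> consume(MiniStream<String> stream) {
        List<String> produced = new ArrayList<>();
        SinkFn<String> producer = produced::add;
        stream.addSink(producer);
        return produced;
    }

    public static void main(String[] args) {
        System.out.println(consume(new MiniStream<>(List.of("a", "b")))); // [a, b]
    }
}
```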
public void emitDataStream(DataStream<Row> dataStream)
Description copied from interface: StreamTableSink
Emits the DataStream.
Specified by:
emitDataStream in interface StreamTableSink<Row>
public TypeInformation<Row> getOutputType()
Specified by:
getOutputType in interface TableSink<Row>

public String[] getFieldNames()
Specified by:
getFieldNames in interface TableSink<Row>

public TypeInformation<?>[] getFieldTypes()
Specified by:
getFieldTypes in interface TableSink<Row>
public KafkaTableSinkBase configure(String[] fieldNames, TypeInformation<?>[] fieldTypes)
Description copied from interface: TableSink
Returns a copy of this TableSink configured with the field names and types of the table to emit.
Specified by:
configure in interface TableSink<Row>

Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.