@Internal
public abstract class KafkaTableSinkBase
extends Object
implements org.apache.flink.table.sinks.AppendStreamTableSink<Row>

A version-agnostic Kafka AppendStreamTableSink.

The version-specific Kafka producers need to extend this class and
override createKafkaProducer(String, Properties, SerializationSchema, Optional).
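The base class follows a template-method pattern: it holds the common configuration (topic, properties, partitioner) while each version-specific module supplies the concrete producer by overriding the abstract factory method. A minimal plain-Java sketch of that pattern, using hypothetical stand-in types rather than the real Flink classes:

```java
import java.util.Optional;
import java.util.Properties;

// Hypothetical stand-ins for the Flink types; for illustration only.
interface SinkFunction { String describe(); }
interface Partitioner {}

abstract class TableSinkBase {
    protected final String topic;
    protected final Properties properties;
    protected final Optional<Partitioner> partitioner;

    protected TableSinkBase(String topic, Properties properties,
                            Optional<Partitioner> partitioner) {
        this.topic = topic;
        this.properties = properties;
        this.partitioner = partitioner;
    }

    // Version-specific subclasses override this factory method,
    // mirroring createKafkaProducer(...) in the real class.
    protected abstract SinkFunction createProducer(
            String topic, Properties properties, Optional<Partitioner> partitioner);

    public SinkFunction build() {
        return createProducer(topic, properties, partitioner);
    }
}

// A hypothetical "version-specific" subclass supplying the concrete producer.
class VersionSpecificTableSink extends TableSinkBase {
    VersionSpecificTableSink(String topic, Properties properties) {
        super(topic, properties, Optional.empty());
    }

    @Override
    protected SinkFunction createProducer(
            String topic, Properties properties, Optional<Partitioner> partitioner) {
        return () -> "producer for topic " + topic;
    }
}

public class Main {
    public static void main(String[] args) {
        TableSinkBase sink = new VersionSpecificTableSink("orders", new Properties());
        System.out.println(sink.build().describe()); // prints: producer for topic orders
    }
}
```

The base class never constructs a producer itself, so it stays independent of any particular Kafka client version.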
Modifier and Type | Field and Description
---|---
`protected String[]` | `fieldNames`
`protected TypeInformation[]` | `fieldTypes`
`protected Optional<FlinkKafkaPartitioner<Row>>` | `partitioner` — Partitioner to select Kafka partition for each item.
`protected Properties` | `properties` — Properties for the Kafka producer.
`protected Optional<SerializationSchema<Row>>` | `serializationSchema` — Serialization schema for encoding records to Kafka.
`protected String` | `topic` — The Kafka topic to write to.
Modifier | Constructor and Description
---|---
| `KafkaTableSinkBase(String topic, Properties properties, FlinkKafkaPartitioner<Row> partitioner)` — Deprecated. Use table descriptors instead of implementation-specific classes.
`protected` | `KafkaTableSinkBase(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)`
Modifier and Type | Method and Description
---|---
`KafkaTableSinkBase` | `configure(String[] fieldNames, TypeInformation<?>[] fieldTypes)`
`protected KafkaTableSinkBase` | `createCopy()` — Deprecated.
`protected abstract SinkFunction<Row>` | `createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner)` — Returns the version-specific Kafka producer.
`protected SerializationSchema<Row>` | `createSerializationSchema(RowTypeInfo rowSchema)` — Deprecated. Use the constructor to pass a serialization schema instead.
`void` | `emitDataStream(DataStream<Row> dataStream)`
`boolean` | `equals(Object o)`
`String[]` | `getFieldNames()`
`TypeInformation<?>[]` | `getFieldTypes()`
`TypeInformation<Row>` | `getOutputType()`
`int` | `hashCode()`
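The `configure`/`createCopy` pair suggests that configuring the sink yields a new, configured instance rather than mutating the original. A simplified, stdlib-only sketch of that copy-then-configure idiom (hypothetical types, not the actual Flink implementation):

```java
import java.util.Arrays;

// Simplified sketch: configure() returns a configured copy of the sink,
// leaving the original instance untouched.
class SimpleSink {
    String[] fieldNames;
    Class<?>[] fieldTypes;

    // Stand-in for the protected createCopy() hook.
    protected SimpleSink createCopy() {
        return new SimpleSink();
    }

    public SimpleSink configure(String[] fieldNames, Class<?>[] fieldTypes) {
        SimpleSink copy = createCopy();
        copy.fieldNames = Arrays.copyOf(fieldNames, fieldNames.length);
        copy.fieldTypes = Arrays.copyOf(fieldTypes, fieldTypes.length);
        return copy;
    }
}

public class ConfigureDemo {
    public static void main(String[] args) {
        SimpleSink sink = new SimpleSink();
        SimpleSink configured = sink.configure(
                new String[]{"id", "name"},
                new Class<?>[]{Long.class, String.class});
        System.out.println(configured != sink);           // prints: true
        System.out.println(configured.fieldNames.length); // prints: 2
    }
}
```

Returning a copy keeps a single sink object reusable across tables with different schemas.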
protected final String topic
protected final Properties properties
protected Optional<SerializationSchema<Row>> serializationSchema
protected final Optional<FlinkKafkaPartitioner<Row>> partitioner
protected String[] fieldNames
protected TypeInformation[] fieldTypes
protected KafkaTableSinkBase(TableSchema schema, String topic, Properties properties, Optional<FlinkKafkaPartitioner<Row>> partitioner, SerializationSchema<Row> serializationSchema)
@Deprecated public KafkaTableSinkBase(String topic, Properties properties, FlinkKafkaPartitioner<Row> partitioner)
Parameters:
topic - Kafka topic to write to.
properties - Properties for the Kafka producer.
partitioner - Partitioner to select Kafka partition for each item.

protected abstract SinkFunction<Row> createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, Optional<FlinkKafkaPartitioner<Row>> partitioner)

Returns the version-specific Kafka producer.
Parameters:
topic - Kafka topic to produce to.
properties - Properties for the Kafka producer.
serializationSchema - Serialization schema to use to create Kafka records.
partitioner - Partitioner to select Kafka partition.

@Deprecated
protected SerializationSchema<Row> createSerializationSchema(RowTypeInfo rowSchema)
Parameters:
rowSchema - the schema of the row to serialize.

@Deprecated
protected KafkaTableSinkBase createCopy()
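The deprecated constructor accepts a bare `FlinkKafkaPartitioner<Row>`, while the newer constructor and `createKafkaProducer` take an `Optional`. Presumably the bare, possibly-null value is adapted into an `Optional`; a stdlib-only sketch of that adaptation (an assumption, not the actual Flink source):

```java
import java.util.Optional;

public class PartitionerWrapDemo {
    // Hypothetical stand-in for FlinkKafkaPartitioner, for illustration only.
    interface Partitioner { String name(); }

    // A null partitioner means "let Kafka pick the partition", so it maps
    // naturally to Optional.empty() via ofNullable (assumed behavior).
    static Optional<Partitioner> wrap(Partitioner p) {
        return Optional.ofNullable(p);
    }

    public static void main(String[] args) {
        Optional<Partitioner> none = wrap(null);
        Optional<Partitioner> some = wrap(() -> "round-robin");
        System.out.println(none.isPresent());  // prints: false
        System.out.println(some.get().name()); // prints: round-robin
    }
}
```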
public void emitDataStream(DataStream<Row> dataStream)
public TypeInformation<Row> getOutputType()
Specified by: getOutputType in interface org.apache.flink.table.sinks.TableSink<Row>

public String[] getFieldNames()
Specified by: getFieldNames in interface org.apache.flink.table.sinks.TableSink<Row>

public TypeInformation<?>[] getFieldTypes()
Specified by: getFieldTypes in interface org.apache.flink.table.sinks.TableSink<Row>

public KafkaTableSinkBase configure(String[] fieldNames, TypeInformation<?>[] fieldTypes)
Specified by: configure in interface org.apache.flink.table.sinks.TableSink<Row>
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.