pyflink.datastream.connectors.kafka.KafkaSink
- class KafkaSink(j_kafka_sink, transformer: Optional[pyflink.datastream.connectors.base.StreamTransformer] = None)
Flink Sink to produce data into a Kafka topic. The sink supports all delivery guarantees described by DeliveryGuarantee:

- DeliveryGuarantee.NONE: does not provide any guarantees. Messages may be lost in case of issues on the Kafka broker and may be duplicated in case of a Flink failure.
- DeliveryGuarantee.AT_LEAST_ONCE: the sink waits on a checkpoint for all outstanding records in the Kafka buffers to be acknowledged by the Kafka producer. No messages are lost in case of issues with the Kafka brokers, but messages may be duplicated when Flink restarts.
- DeliveryGuarantee.EXACTLY_ONCE: the KafkaSink writes all messages in a Kafka transaction that is committed to Kafka on a checkpoint. Thus, if the consumer reads only committed data (see the Kafka consumer config isolation.level), no duplicates are seen in case of a Flink restart. However, this effectively delays record writing until a checkpoint completes, so adjust the checkpoint interval accordingly. Please ensure that you use unique transactional id prefixes across applications running on the same Kafka cluster so that concurrently running jobs do not interfere with each other's transactions. Additionally, it is highly recommended to set the Kafka transaction timeout to be much larger than the maximum checkpoint duration plus the maximum restart duration; otherwise data loss may occur when Kafka expires an uncommitted transaction.
New in version 1.16.0.
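The following is a minimal sketch of building a KafkaSink with exactly-once delivery via the builder described under Methods. The bootstrap server address, topic name, transactional id prefix, and transaction timeout value are placeholders chosen for illustration:

    from pyflink.common.serialization import SimpleStringSchema
    from pyflink.datastream.connectors.base import DeliveryGuarantee
    from pyflink.datastream.connectors.kafka import KafkaSink, KafkaRecordSerializationSchema

    sink = (
        KafkaSink.builder()
        .set_bootstrap_servers("localhost:9092")
        .set_record_serializer(
            KafkaRecordSerializationSchema.builder()
            .set_topic("output-topic")
            .set_value_serialization_schema(SimpleStringSchema())
            .build()
        )
        .set_delivery_guarantee(DeliveryGuarantee.EXACTLY_ONCE)
        # Must be unique per application sharing the Kafka cluster (see note above).
        .set_transactional_id_prefix("my-app-txn")
        # The transaction timeout should be much larger than the maximum checkpoint
        # duration plus the maximum restart duration (here: 15 minutes).
        .set_property("transaction.timeout.ms", "900000")
        .build()
    )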
Methods

- builder(): Create a KafkaSinkBuilder to construct a KafkaSink.
- get_java_function()
- get_transformer()
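As a usage sketch of builder() and the resulting sink attached to a pipeline (broker address, topic, checkpoint interval, and job name are placeholders; with no delivery guarantee set, the builder's default applies):

    from pyflink.common.serialization import SimpleStringSchema
    from pyflink.common.typeinfo import Types
    from pyflink.datastream import StreamExecutionEnvironment
    from pyflink.datastream.connectors.kafka import KafkaSink, KafkaRecordSerializationSchema

    env = StreamExecutionEnvironment.get_execution_environment()
    # Checkpointing drives the at-least-once / exactly-once commit behaviour described above.
    env.enable_checkpointing(60000)

    sink = (
        KafkaSink.builder()
        .set_bootstrap_servers("localhost:9092")
        .set_record_serializer(
            KafkaRecordSerializationSchema.builder()
            .set_topic("output-topic")
            .set_value_serialization_schema(SimpleStringSchema())
            .build()
        )
        .build()
    )

    # Attach the sink to a stream of strings and run the job.
    env.from_collection(["a", "b", "c"], type_info=Types.STRING()).sink_to(sink)
    env.execute("kafka-sink-example")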