IN - type of the records written to Kafka
@PublicEvolving public class KafkaSink<IN> extends Object implements Sink<IN,org.apache.flink.connector.kafka.sink.KafkaCommittable,org.apache.flink.connector.kafka.sink.KafkaWriterState,Void>
DeliveryGuarantee.NONE: Does not provide any guarantees: messages may be lost in case of issues on the Kafka broker, and messages may be duplicated in case of a Flink failure.
DeliveryGuarantee.AT_LEAST_ONCE: The sink will wait for all outstanding records in the Kafka buffers to be acknowledged by the Kafka producer on a checkpoint. No messages will be lost in case of any issue with the Kafka brokers, but messages may be duplicated when Flink restarts.
DeliveryGuarantee.EXACTLY_ONCE: In this mode the KafkaSink writes all messages in a Kafka transaction that is committed to Kafka on a checkpoint. Thus, if the consumer reads only committed data (see the Kafka consumer config isolation.level), no duplicates will be seen in case of a Flink restart. However, this effectively delays record visibility until a checkpoint completes, so adjust the checkpoint interval accordingly. Please ensure that you use unique transactionalIdPrefixes across your applications running on the same Kafka cluster, so that multiple running jobs do not interfere with each other's transactions! Additionally, it is highly recommended to set the Kafka transaction timeout (transaction.timeout.ms) to be much larger than the maximum checkpoint duration plus the maximum restart duration; otherwise, data loss may happen when Kafka expires an uncommitted transaction.
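The exactly-once requirements above can be sketched with the sink builder. This is a minimal, illustrative sketch: the broker address, topic name, prefix, and timeout value are placeholders, not recommendations.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

public class ExactlyOnceSinkSketch {
    public static KafkaSink<String> build() {
        // Keep the transaction timeout well above maximum checkpoint duration
        // plus maximum restart duration, so Kafka does not expire an
        // uncommitted transaction (which would lose data).
        Properties props = new Properties();
        props.setProperty("transaction.timeout.ms", "900000"); // 15 min, illustrative

        return KafkaSink.<String>builder()
                .setBootstrapServers("broker:9092") // placeholder address
                .setKafkaProducerConfig(props)
                .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                // Must be unique per application on the same Kafka cluster,
                // as required above.
                .setTransactionalIdPrefix("my-app-tx")
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("output-topic") // placeholder topic
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .build();
    }
}
```

Consumers of the output topic only avoid duplicates if they read with isolation.level=read_committed.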
|Modifier and Type|Method and Description|
|---|---|
|`Optional<SimpleVersionedSerializer<KafkaCommittable>>`|`getCommittableSerializer()` Returns the serializer of the committable type.|
|`Optional<SimpleVersionedSerializer<Void>>`|`getGlobalCommittableSerializer()` Returns the serializer of the aggregated committable type.|
|`Optional<SimpleVersionedSerializer<KafkaWriterState>>`|`getWriterStateSerializer()` Any stateful sink needs to provide this state serializer and implement `SinkWriter.snapshotState(long)` properly.|
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public static <IN> KafkaSinkBuilder<IN> builder()
IN - type of incoming records
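The builder is the intended entry point for users; the lifecycle methods below are invoked by the Flink runtime. A minimal at-least-once sink wired into a streaming job might look as follows (broker, topic, and job name are illustrative placeholders):

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class AtLeastOnceJobSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // With AT_LEAST_ONCE, outstanding records are flushed and
        // acknowledged on each checkpoint.
        env.enableCheckpointing(60_000);

        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("broker:9092") // placeholder
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("output-topic") // placeholder
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .build();

        env.fromElements("a", "b", "c").sinkTo(sink);
        env.execute("kafka-sink-sketch");
    }
}
```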
public SinkWriter<IN,org.apache.flink.connector.kafka.sink.KafkaCommittable,org.apache.flink.connector.kafka.sink.KafkaWriterState> createWriter(Sink.InitContext context, List<org.apache.flink.connector.kafka.sink.KafkaWriterState> states) throws IOException
Creates a SinkWriter. If the application is resumed from a checkpoint or savepoint and the sink is stateful, it will receive the corresponding state obtained with SinkWriter.snapshotState(long) and serialized with Sink.getWriterStateSerializer(). If no state exists, the first existing, compatible state specified in Sink.getCompatibleStateNames() will be loaded and passed.
Parameters:
context - the runtime context.
states - the writer's previous state.
Throws:
IOException - for any failure during creation.
public Optional<Committer<org.apache.flink.connector.kafka.sink.KafkaCommittable>> createCommitter() throws IOException
Creates a Committer which is part of a 2-phase-commit protocol. The SinkWriter creates committables through SinkWriter.prepareCommit(boolean) in the first phase. The committables are then passed to this committer and persisted with Committer.commit(List). If a committer is returned, the sink must also return a committable serializer (see getCommittableSerializer()).
public Optional<GlobalCommitter<org.apache.flink.connector.kafka.sink.KafkaCommittable,Void>> createGlobalCommitter() throws IOException
Creates a GlobalCommitter which is part of a 2-phase-commit protocol. The SinkWriter creates committables through SinkWriter.prepareCommit(boolean) in the first phase. The committables are then passed to the Committer and persisted with Committer.commit(List). The committables are also passed to this GlobalCommitter, of which only a single instance exists. If a global committer is returned, the sink must also return a global committable serializer (see getGlobalCommittableSerializer()).
public Optional<SimpleVersionedSerializer<org.apache.flink.connector.kafka.sink.KafkaCommittable>> getCommittableSerializer()
public Optional<SimpleVersionedSerializer<Void>> getGlobalCommittableSerializer()
public Optional<SimpleVersionedSerializer<org.apache.flink.connector.kafka.sink.KafkaWriterState>> getWriterStateSerializer()
Any stateful sink needs to provide this state serializer and implement SinkWriter.snapshotState(long) properly. The respective state is used in Sink.createWriter(InitContext, List) on recovery.
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.