pyflink.datastream.connectors.kafka.FlinkKafkaProducer#
- class FlinkKafkaProducer(topic: str, serialization_schema: pyflink.common.serialization.SerializationSchema, producer_config: Dict, kafka_producer_pool_size: int = 5, semantic=Semantic.AT_LEAST_ONCE)[source]#
Flink Sink to produce data into a Kafka topic. By default, the producer uses the AT_LEAST_ONCE semantic. Before using EXACTLY_ONCE, please refer to Flink's Kafka connector documentation.
Methods
get_java_function()

ignore_failures_after_transaction_timeout()
    Disables the propagation of exceptions thrown when committing presumably timed out Kafka transactions during recovery of the job.

set_flush_on_checkpoint(flush_on_checkpoint)
    If set to true, the Flink producer will wait for all outstanding messages in the Kafka buffers to be acknowledged by the Kafka producer on a checkpoint.

set_log_failures_only(log_failures_only)
    Defines whether the producer should fail on errors, or only log them.

set_write_timestamp_to_kafka(...)
    If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
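A minimal sketch of the producer_config dict that the constructor expects. The broker address and timeout value here are illustrative assumptions, not values from this page; Flink's Kafka connector documentation notes that for EXACTLY_ONCE the producer's transaction timeout must not exceed the broker's transaction.max.timeout.ms.

```python
# Sketch of a producer_config for FlinkKafkaProducer.
# Keys are standard Kafka producer properties; values are assumptions.
producer_config = {
    "bootstrap.servers": "localhost:9092",  # assumed broker address
    # Kept within a typical broker-side transaction.max.timeout.ms
    # (15 minutes by default) so EXACTLY_ONCE transactions can commit.
    "transaction.timeout.ms": "900000",
}
```

With pyflink available, this dict would be passed to the constructor along with a serialization schema, e.g. `FlinkKafkaProducer(topic="my-topic", serialization_schema=SimpleStringSchema(), producer_config=producer_config, semantic=Semantic.EXACTLY_ONCE)` (topic name illustrative), and the resulting sink handed to `DataStream.add_sink`.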