public class FlinkKafkaProducer010<T> extends StreamSink<T> implements SinkFunction<T>, RichFunction
Modifier and Type | Class and Description |
---|---|
static class |
FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T>
Configuration object returned by the writeToKafkaWithTimestamps() call.
|
Nested classes/interfaces inherited from class AbstractStreamOperator: AbstractStreamOperator.CountingOutput, AbstractStreamOperator.LatencyGauge
Fields inherited from class AbstractUdfStreamOperator: userFunction
Fields inherited from class AbstractStreamOperator: chainingStrategy, config, latencyGauge, LOG, metrics, output
Constructor and Description |
---|
FlinkKafkaProducer010(String topicId,
KeyedSerializationSchema<T> serializationSchema,
Properties producerConfig)
Creates a FlinkKafkaProducer for a given topic.
|
FlinkKafkaProducer010(String topicId,
KeyedSerializationSchema<T> serializationSchema,
Properties producerConfig,
KafkaPartitioner<T> customPartitioner)
Creates a Kafka producer.
This constructor does not allow writing timestamps to Kafka; it follows approach (a) (see above).
|
FlinkKafkaProducer010(String topicId,
SerializationSchema<T> serializationSchema,
Properties producerConfig)
Creates a FlinkKafkaProducer for a given topic.
|
FlinkKafkaProducer010(String topicId,
SerializationSchema<T> serializationSchema,
Properties producerConfig,
KafkaPartitioner<T> customPartitioner)
Creates a FlinkKafkaProducer for a given topic.
|
FlinkKafkaProducer010(String brokerList,
String topicId,
KeyedSerializationSchema<T> serializationSchema)
Creates a FlinkKafkaProducer for a given topic.
|
FlinkKafkaProducer010(String brokerList,
String topicId,
SerializationSchema<T> serializationSchema)
Creates a FlinkKafkaProducer for a given topic.
|
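All of the constructors above take their producer settings as a `java.util.Properties` object. As a minimal sketch (the `bootstrap.servers` key is standard Kafka; the broker addresses and the optional `retries` setting are illustrative placeholders):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Builds the Properties object passed to the FlinkKafkaProducer010
    // constructors. "bootstrap.servers" is the only required key;
    // "retries" is an optional setting shown for illustration.
    static Properties buildConfig(String brokerList) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", brokerList); // required
        props.setProperty("retries", "3");                  // optional, illustrative
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConfig("broker1:9092,broker2:9092");
        // The result would then be handed to a constructor such as
        // new FlinkKafkaProducer010<>("my-topic", schema, props)
        // (requires the flink-connector-kafka-0.10 dependency).
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```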
Modifier and Type | Method and Description |
---|---|
IterationRuntimeContext |
getIterationRuntimeContext()
This method is used for approach (a) (see above)
|
void |
invoke(T value)
Invoke method for using the Sink as DataStream.addSink() sink.
|
void |
open(Configuration parameters)
This method is used for approach (a) (see above)
|
void |
processElement(StreamRecord<T> element)
Process method for using the sink with timestamp support.
|
void |
setFlushOnCheckpoint(boolean flush)
If set to true, the Flink producer will wait for all outstanding messages in the Kafka buffers
to be acknowledged by the Kafka producer on a checkpoint.
|
void |
setLogFailuresOnly(boolean logFailuresOnly)
Defines whether the producer should fail on errors, or only log them.
|
void |
setRuntimeContext(RuntimeContext t)
This method is used for approach (a) (see above)
|
static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> |
writeToKafkaWithTimestamps(DataStream<T> inStream,
String topicId,
KeyedSerializationSchema<T> serializationSchema,
Properties producerConfig)
Creates a FlinkKafkaProducer for a given topic.
|
static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> |
writeToKafkaWithTimestamps(DataStream<T> inStream,
String topicId,
KeyedSerializationSchema<T> serializationSchema,
Properties producerConfig,
KafkaPartitioner<T> customPartitioner)
Creates a FlinkKafkaProducer for a given topic.
|
static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> |
writeToKafkaWithTimestamps(DataStream<T> inStream,
String topicId,
SerializationSchema<T> serializationSchema,
Properties producerConfig)
Creates a FlinkKafkaProducer for a given topic.
|
Methods inherited from class StreamSink: reportOrForwardLatencyMarker
Methods inherited from class AbstractUdfStreamOperator: close, dispose, getUserFunction, getUserFunctionParameters, initializeState, notifyOfCompletedCheckpoint, open, restoreState, setOutputType, setup, snapshotState, snapshotState
Methods inherited from class AbstractStreamOperator: getChainingStrategy, getContainingTask, getCurrentKey, getExecutionConfig, getInternalTimerService, getKeyedStateBackend, getKeyedStateStore, getMetricGroup, getOperatorConfig, getOperatorName, getOperatorStateBackend, getPartitionedState, getPartitionedState, getProcessingTimeService, getRuntimeContext, getUserCodeClassloader, initializeState, numEventTimeTimers, numProcessingTimeTimers, processLatencyMarker, processLatencyMarker1, processLatencyMarker2, processWatermark, processWatermark1, processWatermark2, setChainingStrategy, setCurrentKey, setKeyContextElement1, setKeyContextElement2, snapshotLegacyOperatorState, snapshotState
Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface RichFunction: close, getRuntimeContext
Methods inherited from interface OneInputStreamOperator: processLatencyMarker, processWatermark
Methods inherited from interface StreamOperator: close, dispose, getChainingStrategy, getMetricGroup, initializeState, notifyOfCompletedCheckpoint, open, setChainingStrategy, setKeyContextElement1, setKeyContextElement2, setup, snapshotLegacyOperatorState, snapshotState
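Several constructors and factory methods in this class accept a `KafkaPartitioner<T>` for assigning messages to Kafka partitions. As an illustrative stand-alone sketch of that contract (Flink's real interface has a richer signature; only the core key-to-partition mapping is shown, and the class name is hypothetical):

```java
public class PartitionerSketch {
    // Deterministically maps a record key to one of the topic's partitions,
    // the job a custom KafkaPartitioner performs for each outgoing message.
    static int partition(String key, int numPartitions) {
        // floorMod avoids a negative result for negative hash codes
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        System.out.println(partition("user-42", 4));
    }
}
```

When `null` is passed instead of a custom partitioner, Kafka's own partitioner decides the placement.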
public FlinkKafkaProducer010(String brokerList, String topicId, SerializationSchema<T> serializationSchema)

Parameters:
brokerList - Comma-separated addresses of the brokers.
topicId - ID of the Kafka topic.
serializationSchema - User-defined (keyless) serialization schema.

public FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig)

Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User-defined (keyless) serialization schema.
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer010(String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig, KafkaPartitioner<T> customPartitioner)

Parameters:
topicId - The topic to write data to.
serializationSchema - A serializable (keyless) serialization schema for turning user objects into a Kafka-consumable byte[].
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions (when passing null, Kafka's partitioner is used).

public FlinkKafkaProducer010(String brokerList, String topicId, KeyedSerializationSchema<T> serializationSchema)

Parameters:
brokerList - Comma-separated addresses of the brokers.
topicId - ID of the Kafka topic.
serializationSchema - User-defined serialization schema supporting key/value messages.

public FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)

Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User-defined serialization schema supporting key/value messages.
producerConfig - Properties with the producer configuration.

public FlinkKafkaProducer010(String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, KafkaPartitioner<T> customPartitioner)
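The keyless constructors take a SerializationSchema whose job is to turn each record into a Kafka-consumable byte[]. A stand-alone sketch of that conversion for String records (the real interface lives in Flink; this hypothetical class mirrors only the serialize logic):

```java
import java.nio.charset.StandardCharsets;

public class KeylessSerializeSketch {
    // Mirrors what a SerializationSchema<String> implementation's
    // serialize(element) body typically does: encode the record as
    // UTF-8 bytes for the Kafka producer.
    static byte[] serialize(String element) {
        return element.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = serialize("hello kafka");
        System.out.println(bytes.length);
    }
}
```

A KeyedSerializationSchema differs only in that it produces a key byte[] and a value byte[] per record instead of a single value.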
public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig)

Parameters:
inStream - The stream to write to Kafka.
topicId - ID of the Kafka topic.
serializationSchema - User-defined serialization schema supporting key/value messages.
producerConfig - Properties with the producer configuration.

public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, SerializationSchema<T> serializationSchema, Properties producerConfig)

Parameters:
inStream - The stream to write to Kafka.
topicId - ID of the Kafka topic.
serializationSchema - User-defined (keyless) serialization schema.
producerConfig - Properties with the producer configuration.

public static <T> FlinkKafkaProducer010.FlinkKafkaProducer010Configuration<T> writeToKafkaWithTimestamps(DataStream<T> inStream, String topicId, KeyedSerializationSchema<T> serializationSchema, Properties producerConfig, KafkaPartitioner<T> customPartitioner)

Parameters:
inStream - The stream to write to Kafka.
topicId - The name of the target topic.
serializationSchema - A serializable serialization schema for turning user objects into a Kafka-consumable byte[] supporting key/value messages.
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions.

public void setLogFailuresOnly(boolean logFailuresOnly)

Parameters:
logFailuresOnly - The flag to indicate logging-only on exceptions.

public void setFlushOnCheckpoint(boolean flush)

Parameters:
flush - Flag indicating the flushing mode (true = flush on checkpoint).

public void open(Configuration parameters) throws Exception

Specified by:
open in interface RichFunction

Parameters:
parameters - The configuration containing the parameters attached to the contract.

Throws:
Exception - Implementations may forward exceptions, which are caught by the runtime. When the runtime catches an exception, it aborts the task and lets the fail-over logic decide whether to retry the task execution.

See Also:
Configuration
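The flush-on-checkpoint behavior configured by setFlushOnCheckpoint(boolean) above can be modeled as a pending-record counter that the checkpoint waits on. The following is an illustrative sketch, not Flink's internal implementation; all names are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class FlushOnCheckpointSketch {
    // Models the documented contract: with flushOnCheckpoint enabled, the
    // checkpoint does not complete until every outstanding Kafka record
    // has been acknowledged by the Kafka producer.
    private final AtomicInteger pendingRecords = new AtomicInteger();
    private final boolean flushOnCheckpoint;

    FlushOnCheckpointSketch(boolean flushOnCheckpoint) {
        this.flushOnCheckpoint = flushOnCheckpoint;
    }

    void recordSent()  { pendingRecords.incrementAndGet(); } // on producer.send()
    void recordAcked() { pendingRecords.decrementAndGet(); } // on send callback

    // Called on checkpoint: spins until all acks have arrived (the real
    // sink flushes the Kafka producer and blocks instead of spinning).
    void snapshotState() {
        if (flushOnCheckpoint) {
            while (pendingRecords.get() > 0) {
                Thread.yield();
            }
        }
    }

    int pending() { return pendingRecords.get(); }
}
```

With the flag disabled, a checkpoint can complete while records are still buffered, which weakens the delivery guarantee on failure.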
public IterationRuntimeContext getIterationRuntimeContext()

Specified by:
getIterationRuntimeContext in interface RichFunction

public void setRuntimeContext(RuntimeContext t)

Specified by:
setRuntimeContext in interface RichFunction

Parameters:
t - The runtime context.

public void invoke(T value) throws Exception

Specified by:
invoke in interface SinkFunction<T>

Parameters:
value - The input record.

Throws:
Exception

public void processElement(StreamRecord<T> element) throws Exception

Specified by:
processElement in interface OneInputStreamOperator<T,Object>

Overrides:
processElement in class StreamSink<T>

Throws:
Exception
Copyright © 2014–2017 The Apache Software Foundation. All rights reserved.