Type Parameters:
T - The type of records produced by this data source

public abstract class FlinkKafkaConsumerBase<T>
extends RichParallelSourceFunction<T>
implements CheckpointListener, CheckpointedAsynchronously<HashMap<KafkaTopicPartition,Long>>, ResultTypeQueryable<T>
The Kafka version specific behavior is defined mainly in the specific subclasses of the AbstractFetcher.
SourceFunction.SourceContext<T>
Modifier and Type | Field and Description |
---|---|
protected List<KafkaTopicPartition> |
allSubscribedPartitions
The list of topic partitions that the source will read from
|
protected KeyedDeserializationSchema<T> |
deserializer
The schema to convert between Kafka's byte messages, and Flink's objects
|
static String |
KEY_DISABLE_METRICS
Boolean configuration key to disable metrics tracking
|
protected static org.slf4j.Logger |
LOG |
static int |
MAX_NUM_PENDING_CHECKPOINTS
The maximum number of pending non-committed checkpoints to track, to avoid memory leaks
|
Constructor and Description |
---|
FlinkKafkaConsumerBase(KeyedDeserializationSchema<T> deserializer)
Base constructor.
|
Modifier and Type | Method and Description |
---|---|
protected static List<KafkaTopicPartition> |
assignPartitions(Map<KafkaTopicPartition,Long> restoredPartitionOffsets,
List<KafkaTopicPartition> completeKafkaPartitionsList,
int numConsumers,
int consumerIndex)
Determines which partitions the consumer should subscribe to.
|
FlinkKafkaConsumerBase<T> |
assignTimestampsAndWatermarks(AssignerWithPeriodicWatermarks<T> assigner)
Specifies an
AssignerWithPeriodicWatermarks to emit watermarks in a periodic manner. |
FlinkKafkaConsumerBase<T> |
assignTimestampsAndWatermarks(AssignerWithPunctuatedWatermarks<T> assigner)
Specifies an
AssignerWithPunctuatedWatermarks to emit watermarks in a punctuated manner. |
void |
cancel()
Cancels the source.
|
void |
close()
Tear-down method for the user code.
|
protected abstract AbstractFetcher<T,?> |
createFetcher(SourceFunction.SourceContext<T> sourceContext,
List<KafkaTopicPartition> thisSubtaskPartitions,
SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic,
SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated,
StreamingRuntimeContext runtimeContext)
Creates the fetcher that connects to the Kafka brokers, pulls data, deserializes the
data, and emits it into the data streams.
|
TypeInformation<T> |
getProducedType()
Gets the data type (as a
TypeInformation ) produced by this function or input format. |
protected static void |
logPartitionInfo(org.slf4j.Logger logger,
List<KafkaTopicPartition> partitionInfos)
Logs the partition information at INFO level.
|
void |
notifyCheckpointComplete(long checkpointId)
This method is called as a notification once a distributed checkpoint has been completed.
|
void |
restoreState(HashMap<KafkaTopicPartition,Long> restoredOffsets)
Restores the state of the function or operator to that of a previous checkpoint.
|
void |
run(SourceFunction.SourceContext<T> sourceContext)
Starts the source.
|
protected void |
setSubscribedPartitions(List<KafkaTopicPartition> allSubscribedPartitions)
This method must be called from the subclasses, to set the list of all subscribed partitions
that this consumer will fetch from (across all subtasks).
|
HashMap<KafkaTopicPartition,Long> |
snapshotState(long checkpointId,
long checkpointTimestamp)
Gets the current state of the function or operator.
|
getIterationRuntimeContext, getRuntimeContext, open, setRuntimeContext
protected static final org.slf4j.Logger LOG
public static final int MAX_NUM_PENDING_CHECKPOINTS
public static final String KEY_DISABLE_METRICS
protected final KeyedDeserializationSchema<T> deserializer
protected List<KafkaTopicPartition> allSubscribedPartitions
public FlinkKafkaConsumerBase(KeyedDeserializationSchema<T> deserializer)
deserializer
- The deserializer to turn raw byte messages into Java/Scala objects.

protected void setSubscribedPartitions(List<KafkaTopicPartition> allSubscribedPartitions)
allSubscribedPartitions
- The list of all partitions that all subtasks together should fetch from.

public FlinkKafkaConsumerBase<T> assignTimestampsAndWatermarks(AssignerWithPunctuatedWatermarks<T> assigner)

Specifies an AssignerWithPunctuatedWatermarks to emit watermarks in a punctuated manner.
The watermark extractor will run per Kafka partition, and watermarks will be merged across partitions
in the same way as in the Flink runtime when streams are merged.
When a subtask of a FlinkKafkaConsumer source reads multiple Kafka partitions, the streams from the partitions are unioned in a "first come first serve" fashion. Per-partition characteristics are usually lost that way. For example, if the timestamps are strictly ascending per Kafka partition, they will not be strictly ascending in the resulting Flink DataStream if the parallel source subtask reads more than one partition.
Running timestamp extractors / watermark generators directly inside the Kafka source, per Kafka partition, allows users to exploit the per-partition characteristics.
Note: One can use either an AssignerWithPunctuatedWatermarks or an AssignerWithPeriodicWatermarks, not both at the same time.
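The per-partition merging described above can be illustrated with a small, self-contained sketch (plain Java, not Flink's internal code; the method name is hypothetical): the source's overall watermark is the minimum over the per-partition watermarks, so the slowest partition gates event-time progress.

```java
// Illustrative only: models how per-partition watermarks combine when
// partition streams are merged.
public class PartitionWatermarkMerge {

    // Merged watermark = minimum over all per-partition watermarks.
    // An empty array yields Long.MAX_VALUE (no partitions to gate progress).
    public static long mergedWatermark(long[] perPartitionWatermarks) {
        long min = Long.MAX_VALUE;
        for (long w : perPartitionWatermarks) {
            min = Math.min(min, w);
        }
        return min;
    }

    public static void main(String[] args) {
        // Three partitions whose watermarks have advanced unevenly:
        long[] watermarks = {1000L, 250L, 4000L};
        // The slowest partition (250) determines the merged watermark.
        System.out.println(mergedWatermark(watermarks)); // prints 250
    }
}
```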
assigner
- The timestamp assigner / watermark generator to use.

public FlinkKafkaConsumerBase<T> assignTimestampsAndWatermarks(AssignerWithPeriodicWatermarks<T> assigner)

Specifies an AssignerWithPeriodicWatermarks to emit watermarks in a periodic manner.
The watermark extractor will run per Kafka partition, and watermarks will be merged across partitions
in the same way as in the Flink runtime when streams are merged.
When a subtask of a FlinkKafkaConsumer source reads multiple Kafka partitions, the streams from the partitions are unioned in a "first come first serve" fashion. Per-partition characteristics are usually lost that way. For example, if the timestamps are strictly ascending per Kafka partition, they will not be strictly ascending in the resulting Flink DataStream if the parallel source subtask reads more than one partition.
Running timestamp extractors / watermark generators directly inside the Kafka source, per Kafka partition, allows users to exploit the per-partition characteristics.
Note: One can use either an AssignerWithPunctuatedWatermarks or an AssignerWithPeriodicWatermarks, not both at the same time.
assigner
- The timestamp assigner / watermark generator to use.

public void run(SourceFunction.SourceContext<T> sourceContext) throws Exception
Description copied from interface: SourceFunction

Starts the source. Implementations can use the SourceFunction.SourceContext to emit elements.
Sources that implement Checkpointed must lock on the checkpoint lock (using a synchronized block) before updating internal state and emitting elements, to make both an atomic operation:

```java
public class ExampleSource implements SourceFunction<Long>, Checkpointed<Long> {
    private long count = 0L;
    private volatile boolean isRunning = true;

    public void run(SourceContext<Long> ctx) {
        while (isRunning && count < 1000) {
            // Hold the checkpoint lock so that emitting the element and
            // updating the counter appear atomic to a concurrent checkpoint.
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(count);
                count++;
            }
        }
    }

    public void cancel() {
        isRunning = false;
    }

    public Long snapshotState(long checkpointId, long checkpointTimestamp) {
        return count;
    }

    public void restoreState(Long state) {
        this.count = state;
    }
}
```
run
in interface SourceFunction<T>
sourceContext
- The context to emit elements to and for accessing locks.
Exception
public void cancel()

Description copied from interface: SourceFunction

Cancels the source. Most sources will have a while loop inside the
SourceFunction.run(SourceContext)
method. The implementation needs to ensure that the
source will break out of that loop after this method is called.
A typical pattern is to have a "volatile boolean isRunning"
flag that is set to
false
in this method. That flag is checked in the loop condition.
When a source is canceled, the executing thread will also be interrupted
(via Thread.interrupt()
). The interruption happens strictly after this
method has been called, so any interruption handler can rely on the fact that
this method has completed. It is good practice to make any flags altered by
this method "volatile", in order to guarantee the visibility of the effects of
this method to any interruption handler.
cancel
in interface SourceFunction<T>
public void close() throws Exception
Description copied from interface: RichFunction

Tear-down method for the user code. This method can be used for clean up work.
close
in interface RichFunction
close
in class AbstractRichFunction
Exception
- Implementations may forward exceptions, which are caught by the runtime. When the
runtime catches an exception, it aborts the task and lets the fail-over logic
decide whether to retry the task execution.

public HashMap<KafkaTopicPartition,Long> snapshotState(long checkpointId, long checkpointTimestamp) throws Exception
Description copied from interface: Checkpointed
snapshotState
in interface Checkpointed<HashMap<KafkaTopicPartition,Long>>
checkpointId
- The ID of the checkpoint.
checkpointTimestamp
- The timestamp of the checkpoint, as derived by
System.currentTimeMillis() on the JobManager.
Exception
- Thrown if the creation of the state object failed. This causes the
checkpoint to fail. The system may decide to fail the operation (and trigger
recovery), or to discard this checkpoint attempt and to continue running
and to try again with the next checkpoint attempt.

public void restoreState(HashMap<KafkaTopicPartition,Long> restoredOffsets)
Description copied from interface: Checkpointed
restoreState
in interface Checkpointed<HashMap<KafkaTopicPartition,Long>>
restoredOffsets
- The state to be restored.

public void notifyCheckpointComplete(long checkpointId) throws Exception
Description copied from interface: CheckpointListener
notifyCheckpointComplete
in interface CheckpointListener
checkpointId
- The ID of the checkpoint that has been completed.
Exception
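The snapshot/notify protocol behind these three methods can be sketched in plain Java (the class and field names, the per-checkpoint bookkeeping, and the eviction policy are assumptions for illustration, not Flink's actual implementation): snapshotState captures a copy of the current offsets, and only once notifyCheckpointComplete confirms that checkpoint are those offsets safe to commit back to Kafka.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of offset checkpointing; not Flink source code.
public class OffsetCheckpointSketch {
    // Cap on tracked pending checkpoints, mirroring MAX_NUM_PENDING_CHECKPOINTS.
    static final int MAX_NUM_PENDING_CHECKPOINTS = 100;

    final Map<Integer, Long> currentOffsets = new HashMap<>();     // partition -> next offset
    final Map<Long, Map<Integer, Long>> pending = new HashMap<>(); // checkpointId -> snapshot
    final ArrayDeque<Long> pendingOrder = new ArrayDeque<>();
    Map<Integer, Long> committed = new HashMap<>();

    // snapshotState: copy the current offsets and remember them until the
    // checkpoint is confirmed; bound the map to avoid a memory leak.
    public Map<Integer, Long> snapshotState(long checkpointId) {
        Map<Integer, Long> copy = new HashMap<>(currentOffsets);
        pending.put(checkpointId, copy);
        pendingOrder.addLast(checkpointId);
        while (pendingOrder.size() > MAX_NUM_PENDING_CHECKPOINTS) {
            pending.remove(pendingOrder.removeFirst()); // drop the oldest
        }
        return copy;
    }

    // notifyCheckpointComplete: the checkpoint is durable, so the offsets
    // captured for it may now be committed (e.g. to Kafka or ZooKeeper).
    public void notifyCheckpointComplete(long checkpointId) {
        Map<Integer, Long> snapshot = pending.remove(checkpointId);
        if (snapshot != null) {
            committed = snapshot;
        }
    }

    public static void main(String[] args) {
        OffsetCheckpointSketch s = new OffsetCheckpointSketch();
        s.currentOffsets.put(0, 42L);
        s.snapshotState(1L);
        s.currentOffsets.put(0, 99L); // reading continues after the snapshot
        s.notifyCheckpointComplete(1L);
        System.out.println(s.committed.get(0)); // prints 42: the snapshotted offset
    }
}
```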
protected abstract AbstractFetcher<T,?> createFetcher(SourceFunction.SourceContext<T> sourceContext, List<KafkaTopicPartition> thisSubtaskPartitions, SerializedValue<AssignerWithPeriodicWatermarks<T>> watermarksPeriodic, SerializedValue<AssignerWithPunctuatedWatermarks<T>> watermarksPunctuated, StreamingRuntimeContext runtimeContext) throws Exception
sourceContext
- The source context to emit data to.
thisSubtaskPartitions
- The set of partitions that this subtask should handle.
watermarksPeriodic
- Optional, a serialized timestamp extractor / periodic watermark generator.
watermarksPunctuated
- Optional, a serialized timestamp extractor / punctuated watermark generator.
runtimeContext
- The task's runtime context.
Exception
- The method should forward exceptions.

public TypeInformation<T> getProducedType()
Description copied from interface: ResultTypeQueryable

Gets the data type (as a TypeInformation) produced by this function or input format.

getProducedType
in interface ResultTypeQueryable<T>
protected static List<KafkaTopicPartition> assignPartitions(Map<KafkaTopicPartition,Long> restoredPartitionOffsets, List<KafkaTopicPartition> completeKafkaPartitionsList, int numConsumers, int consumerIndex)
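One plausible distribution rule can be sketched in plain Java with String partition names (the index-modulo strategy is an assumption about how partitions are split across parallel consumer subtasks, not a verbatim copy of this method):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each of numConsumers subtasks takes every
// partition whose index is congruent to its own consumerIndex.
public class PartitionAssignmentSketch {

    public static List<String> assignPartitions(List<String> allPartitions,
                                                int numConsumers,
                                                int consumerIndex) {
        List<String> mine = new ArrayList<>();
        for (int i = 0; i < allPartitions.size(); i++) {
            if (i % numConsumers == consumerIndex) {
                mine.add(allPartitions.get(i));
            }
        }
        return mine;
    }

    public static void main(String[] args) {
        List<String> all = List.of("t-0", "t-1", "t-2", "t-3", "t-4");
        // Two consumers split five partitions; together they cover all of them.
        System.out.println(assignPartitions(all, 2, 0)); // [t-0, t-2, t-4]
        System.out.println(assignPartitions(all, 2, 1)); // [t-1, t-3]
    }
}
```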
restoredPartitionOffsets
- The restored partition offsets, if any.
completeKafkaPartitionsList
- The complete list of Kafka partitions.
numConsumers
- The number of consumers.
consumerIndex
- The index of the specific consumer.

protected static void logPartitionInfo(org.slf4j.Logger logger, List<KafkaTopicPartition> partitionInfos)
logger
- The logger to log to.
partitionInfos
- List of subscribed partitions.

Copyright © 2014–2017 The Apache Software Foundation. All rights reserved.