Connectors#
File System#
File Source#
| Class | Description |
| --- | --- |
| `FileEnumeratorProvider` | Factory for the `FileEnumerator`, whose task is to discover all files to be read and to split them into a set of file source splits. |
| `FileSplitAssignerProvider` | Factory for the `FileSplitAssigner`, which is responsible for deciding which split should be processed next by which node. |
| `StreamFormat` | A reader format that reads individual records from a stream. |
| `BulkFormat` | A reader format that reads and decodes batches of records at a time. |
| `FileSourceBuilder` | The builder for the `FileSource`, used to configure its various behaviors. |
| `FileSource` | A unified data source that reads files, both in batch and in streaming mode. |
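A minimal sketch of how these classes compose: a `FileSource` built from a `StreamFormat` that reads text lines, switched into streaming mode via continuous monitoring. The input path and discovery interval are placeholder values.

```python
from pyflink.common import Duration, WatermarkStrategy
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.file_system import FileSource, StreamFormat

env = StreamExecutionEnvironment.get_execution_environment()

# for_record_stream_format() returns a FileSourceBuilder;
# monitor_continuously() turns the bounded source into a streaming one.
source = FileSource.for_record_stream_format(
        StreamFormat.text_line_format(), "/tmp/input") \
    .monitor_continuously(Duration.of_seconds(10)) \
    .build()

stream = env.from_source(source, WatermarkStrategy.no_watermarks(), "file-source")
```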
File Sink#
| Class | Description |
| --- | --- |
| `BucketAssigner` | Used with a file sink to determine the bucket into which each incoming element should be put. |
| `RollingPolicy` | The policy based on which a bucket in the `FileSink` rolls its currently open part file and opens a new one. |
| `DefaultRollingPolicy` | The default implementation of the `RollingPolicy`. |
| `OnCheckpointRollingPolicy` | A `RollingPolicy` which rolls only on every checkpoint. |
| `OutputFileConfig` | Part file name configuration. |
| `FileCompactStrategy` | Strategy for compacting the files written by the `FileSink` before committing. |
| `FileCompactor` | Responsible for compacting several files into one file. |
| `FileSink` | A unified sink that emits its input elements to `FileSystem` files within buckets. |
| `StreamingFileSink` | The older, streaming-only sink that emits its input elements to `FileSystem` files within buckets (predecessor of `FileSink`). |
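A sketch of a row-encoded `FileSink` combining the `DefaultRollingPolicy` with an `OutputFileConfig`. The output path and the size/interval thresholds are placeholders, and `stream` stands for any `DataStream` of strings, such as the one built above.

```python
from pyflink.common.serialization import Encoder
from pyflink.datastream.connectors.file_system import (
    FileSink, OutputFileConfig, RollingPolicy)

# Roll part files at 1 GiB, every 15 minutes, or after 5 minutes of inactivity.
sink = FileSink.for_row_format("/tmp/output", Encoder.simple_string_encoder()) \
    .with_rolling_policy(RollingPolicy.default_rolling_policy(
        part_size=1024 * 1024 * 1024,
        rollover_interval=15 * 60 * 1000,
        inactivity_interval=5 * 60 * 1000)) \
    .with_output_file_config(OutputFileConfig.builder()
                             .with_part_prefix("part")
                             .with_part_suffix(".txt")
                             .build()) \
    .build()

stream.sink_to(sink)
```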
Number Sequence#
| Class | Description |
| --- | --- |
| `NumberSequenceSource` | A data source that produces a sequence of numbers (longs). |
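For example, a bounded stream of the longs 1 through 1000 (a minimal sketch; the bounds are arbitrary):

```python
from pyflink.common import WatermarkStrategy
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.number_seq import NumberSequenceSource

env = StreamExecutionEnvironment.get_execution_environment()

# Emits every long in [1, 1000] and then finishes.
seq = NumberSequenceSource(1, 1000)
ds = env.from_source(seq, WatermarkStrategy.no_watermarks(),
                     "number-sequence", type_info=Types.LONG())
```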
Kafka#
Kafka Producer and Consumer#
| Class | Description |
| --- | --- |
| `FlinkKafkaConsumer` | The Flink Kafka Consumer is a streaming data source that pulls a parallel data stream from Apache Kafka. |
| `FlinkKafkaProducer` | Flink Sink to produce data into a Kafka topic. |
| `Semantic` | The delivery semantics that can be chosen for the producer: `EXACTLY_ONCE`, `AT_LEAST_ONCE`, or `NONE`. |
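A minimal sketch of the legacy consumer/producer pair, assuming a PyFlink version that still ships them; the broker address, topics, and group id are placeholders, and `env` is a `StreamExecutionEnvironment`.

```python
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream.connectors import FlinkKafkaConsumer, FlinkKafkaProducer

props = {'bootstrap.servers': 'localhost:9092', 'group.id': 'my-group'}

# Read strings from one topic and write them to another.
consumer = FlinkKafkaConsumer('input-topic', SimpleStringSchema(), props)
producer = FlinkKafkaProducer('output-topic', SimpleStringSchema(), props)

env.add_source(consumer).add_sink(producer)
```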
Kafka Source and Sink#
| Class | Description |
| --- | --- |
| `KafkaSource` | The `Source` implementation of Kafka. |
| `KafkaSourceBuilder` | The builder class for `KafkaSource`, to make it easier for users to construct a `KafkaSource`. |
| `KafkaTopicPartition` | Corresponds to the Java `org.apache.kafka.common.TopicPartition` class. |
| `KafkaOffsetResetStrategy` | Corresponds to the Java `org.apache.kafka.clients.consumer.OffsetResetStrategy` class. |
| `KafkaOffsetsInitializer` | An interface for users to specify the starting / stopping offset of a `KafkaPartitionSplit`. |
| `KafkaSink` | Flink Sink to produce data into a Kafka topic. |
| `KafkaSinkBuilder` | Builder to construct `KafkaSink`. |
| `KafkaRecordSerializationSchema` | A serialization schema which defines how to convert a stream record to a Kafka producer record. |
| `KafkaRecordSerializationSchemaBuilder` | Builder to construct `KafkaRecordSerializationSchema`. |
| `KafkaTopicSelector` | Selects the topic for an incoming record. |
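A sketch wiring a `KafkaSource` to a `KafkaSink` through their builders; the broker address, topics, and group id are placeholders, and `env` is a `StreamExecutionEnvironment`.

```python
from pyflink.common import WatermarkStrategy
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream.connectors.kafka import (
    KafkaOffsetsInitializer, KafkaRecordSerializationSchema, KafkaSink, KafkaSource)

# Source: read string values starting from the earliest available offsets.
source = KafkaSource.builder() \
    .set_bootstrap_servers("localhost:9092") \
    .set_topics("input-topic") \
    .set_group_id("my-group") \
    .set_starting_offsets(KafkaOffsetsInitializer.earliest()) \
    .set_value_only_deserializer(SimpleStringSchema()) \
    .build()

# Sink: the record serializer decides topic and value serialization.
sink = KafkaSink.builder() \
    .set_bootstrap_servers("localhost:9092") \
    .set_record_serializer(KafkaRecordSerializationSchema.builder()
                           .set_topic("output-topic")
                           .set_value_serialization_schema(SimpleStringSchema())
                           .build()) \
    .build()

env.from_source(source, WatermarkStrategy.no_watermarks(), "kafka-source") \
   .sink_to(sink)
```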
Kinesis#
Kinesis Source#
| Class | Description |
| --- | --- |
| `KinesisShardAssigner` | Utility to map Kinesis shards to Flink subtask indices. |
| `KinesisDeserializationSchema` | A deserialization schema specific to the Flink Kinesis Consumer. |
| `WatermarkTracker` | Responsible for aggregating watermarks across distributed operators. |
| `FlinkKinesisConsumer` | An exactly-once parallel streaming data source that subscribes to multiple AWS Kinesis streams within the same AWS service region and can handle resharding of streams. |
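A minimal sketch of the consumer; the stream name, region, and initial position are placeholders, and credentials configuration is omitted. `env` is assumed to be a `StreamExecutionEnvironment`.

```python
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream.connectors.kinesis import FlinkKinesisConsumer

consumer_config = {
    'aws.region': 'us-east-1',
    'flink.stream.initpos': 'LATEST'  # where to start reading the shards
}

stream = env.add_source(
    FlinkKinesisConsumer('my-stream', SimpleStringSchema(), consumer_config))
```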
Kinesis Sink#
| Class | Description |
| --- | --- |
| `PartitionKeyGenerator` | A generator that converts an input element into a partition key (a string). |
| `KinesisStreamsSink` | A Kinesis Data Streams (KDS) sink that performs async requests against a destination stream using the buffering protocol. |
| `KinesisStreamsSinkBuilder` | Builder to construct `KinesisStreamsSink`. |
| `KinesisFirehoseSink` | A Kinesis Data Firehose (KDF) sink that performs async requests against a destination delivery stream using the buffering protocol. |
| `KinesisFirehoseSinkBuilder` | Builder to construct `KinesisFirehoseSink`. |
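A sketch of a `KinesisStreamsSink` built with a fixed `PartitionKeyGenerator`; the region and stream name are placeholders, and `stream` is any `DataStream` of strings.

```python
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream.connectors.kinesis import (
    KinesisStreamsSink, PartitionKeyGenerator)

# fixed() derives one partition key per subtask; random() is the alternative.
sink = KinesisStreamsSink.builder() \
    .set_kinesis_client_properties({'aws.region': 'us-east-1'}) \
    .set_serialization_schema(SimpleStringSchema()) \
    .set_partition_key_generator(PartitionKeyGenerator.fixed()) \
    .set_stream_name('my-output-stream') \
    .build()

stream.sink_to(sink)
```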
Pulsar#
Pulsar Source#
| Class | Description |
| --- | --- |
| `StartCursor` | A factory class for users to specify the start position of a Pulsar subscription. |
| `StopCursor` | A factory class for users to specify the stop position of a Pulsar subscription. |
| `RangeGenerator` | A generator for generating the `TopicRange` for a given topic. |
| `PulsarSource` | The `Source` implementation of Pulsar. |
| `PulsarSourceBuilder` | The builder class for `PulsarSource`, to make it easier for users to construct a `PulsarSource`. |
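A sketch of a `PulsarSource`, assuming a recent PyFlink where the builder accepts a plain `DeserializationSchema`; the service/admin URLs, topic, and subscription name are placeholders, and `env` is a `StreamExecutionEnvironment`.

```python
from pyflink.common import WatermarkStrategy
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream.connectors.pulsar import (
    PulsarSource, StartCursor, StopCursor)

# Start from the earliest message and never stop (an unbounded stream).
source = PulsarSource.builder() \
    .set_service_url('pulsar://localhost:6650') \
    .set_admin_url('http://localhost:8080') \
    .set_topics('my-topic') \
    .set_start_cursor(StartCursor.earliest()) \
    .set_unbounded_stop_cursor(StopCursor.never()) \
    .set_subscription_name('my-subscription') \
    .set_deserialization_schema(SimpleStringSchema()) \
    .build()

stream = env.from_source(source, WatermarkStrategy.no_watermarks(), "pulsar-source")
```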
Pulsar Sink#
| Class | Description |
| --- | --- |
| `TopicRoutingMode` | The routing policy for choosing the desired topic for a given message. |
| `MessageDelayer` | Determines how long the Pulsar broker delays a sent message before passing it to the downstream consumer. |
| `PulsarSink` | The Sink implementation of Pulsar. |
| `PulsarSinkBuilder` | The builder class for `PulsarSink`, to make it easier for users to construct a `PulsarSink`. |
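A matching sketch for the sink side; the URLs and topic are again placeholders, and `stream` is any `DataStream` of strings.

```python
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream.connectors import DeliveryGuarantee
from pyflink.datastream.connectors.pulsar import PulsarSink, TopicRoutingMode

# Round-robin routing spreads messages across the configured topics.
sink = PulsarSink.builder() \
    .set_service_url('pulsar://localhost:6650') \
    .set_admin_url('http://localhost:8080') \
    .set_topics('my-topic') \
    .set_serialization_schema(SimpleStringSchema()) \
    .set_delivery_guarantee(DeliveryGuarantee.AT_LEAST_ONCE) \
    .set_topic_routing_mode(TopicRoutingMode.ROUND_ROBIN) \
    .build()

stream.sink_to(sink)
```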
Jdbc#
| Class | Description |
| --- | --- |
| `JdbcSink` | Sink that writes stream records into a JDBC database. |
| `JdbcConnectionOptions` | JDBC connection options. |
| `JdbcExecutionOptions` | JDBC sink batch options. |
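A sketch of `JdbcSink.sink(...)` combining both option classes; the URL, credentials, and SQL are placeholders, and `ds` is a `DataStream` of `Row`s matching the statement parameters.

```python
from pyflink.common.typeinfo import Types
from pyflink.datastream.connectors.jdbc import (
    JdbcConnectionOptions, JdbcExecutionOptions, JdbcSink)

# Row type matching the two '?' parameters of the insert statement.
type_info = Types.ROW([Types.INT(), Types.STRING()])

ds.add_sink(JdbcSink.sink(
    "insert into orders (id, name) values (?, ?)",
    type_info,
    JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
        .with_url('jdbc:postgresql://localhost:5432/mydb')
        .with_driver_name('org.postgresql.Driver')
        .with_user_name('user')
        .with_password('password')
        .build(),
    JdbcExecutionOptions.builder()
        .with_batch_interval_ms(1000)  # flush at least once per second
        .with_batch_size(200)          # or whenever 200 records are buffered
        .with_max_retries(5)
        .build()))
```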
RMQ#
| Class | Description |
| --- | --- |
| `RMQConnectionConfig` | Connection configuration for RMQ (RabbitMQ). |
| `RMQSource` | Source that consumes messages from a RabbitMQ queue. |
| `RMQSink` | Sink that publishes messages to a RabbitMQ queue. |
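A sketch reading from one queue and writing to another; the host, credentials, and queue names are placeholders. Passing `True` for the source's `use_correlation_id` argument additionally requires checkpointing and correlation ids on the messages for exactly-once behavior.

```python
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream.connectors.rabbitmq import (
    RMQConnectionConfig, RMQSink, RMQSource)

connection_config = RMQConnectionConfig.Builder() \
    .set_host('localhost') \
    .set_port(5672) \
    .set_virtual_host('/') \
    .set_user_name('guest') \
    .set_password('guest') \
    .build()

stream = env.add_source(RMQSource(
    connection_config, 'source-queue', True, SimpleStringSchema()))
stream.add_sink(RMQSink(connection_config, 'sink-queue', SimpleStringSchema()))
```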
Elasticsearch#
| Class | Description |
| --- | --- |
| `FlushBackoffType` | Controls whether the sink should retry failed requests at all, and with which kind of backoff strategy. |
| `ElasticsearchEmitter` | Emitter used by sinks to prepare elements for sending to Elasticsearch. |
| `Elasticsearch6SinkBuilder` | Builder to construct an Elasticsearch 6 compatible `ElasticsearchSink`. |
| `Elasticsearch7SinkBuilder` | Builder to construct an Elasticsearch 7 compatible `ElasticsearchSink`. |
| `ElasticsearchSink` | Flink Sink to insert or update data in an Elasticsearch index. |
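A sketch of an Elasticsearch 7 sink; the host and index are placeholders, and the `static_index` emitter assumes the incoming records are dicts containing an `'id'` field used as the document id.

```python
from pyflink.datastream.connectors.elasticsearch import (
    Elasticsearch7SinkBuilder, ElasticsearchEmitter, FlushBackoffType)

# Retry failed bulk requests 3 times with a constant 3 s backoff.
es_sink = Elasticsearch7SinkBuilder() \
    .set_hosts(['localhost:9200']) \
    .set_emitter(ElasticsearchEmitter.static_index('my-index', 'id')) \
    .set_bulk_flush_max_actions(1) \
    .set_bulk_flush_backoff_strategy(FlushBackoffType.CONSTANT, 3, 3000) \
    .build()

stream.sink_to(es_sink)
```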
Cassandra#
| Class | Description |
| --- | --- |
| `CassandraSink` | Flink Sink to write data into a Cassandra database, configured through a `ClusterBuilder`. |
| `ConsistencyLevel` | The consistency level to use for writes. |
| `MapperOptions` | Used to configure a `Mapper` after deployment. |
| `ClusterBuilder` | Used to configure a `Cluster` after deployment. |
| `CassandraCommitter` | `CheckpointCommitter` that saves information about completed checkpoints within a separate table in a Cassandra database. |
| `CassandraFailureHandler` | Handles a failed `Throwable`. |
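A sketch of the Cassandra sink; the keyspace, table, query, and host are placeholders, and `ds` is a `DataStream` of tuples matching the query parameters.

```python
from pyflink.datastream.connectors.cassandra import CassandraSink

# set_host configures the ClusterBuilder under the hood.
CassandraSink.add_sink(ds) \
    .set_query('INSERT INTO example.counts (id, count) VALUES (?, ?);') \
    .set_host('localhost', 9042) \
    .build()
```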