This documentation is for an out-of-date version of Apache Flink. We recommend you use the latest stable version.
Flink’s Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Parquet, or ORC.
This page describes how to declare built-in table sources and/or table sinks and register them in Flink. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
The following tables list all available connectors and formats. Their mutual compatibility is tagged in the corresponding sections for table connectors and table formats. The following tables provide dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
This allows not only for better unification of APIs and SQL Client but also for better extensibility in case of custom implementations without changing the actual declaration.
Every declaration is similar to a SQL CREATE TABLE statement. One can define the name of the table, the schema of the table, a connector, and a data format upfront for connecting to an external system.
The connector describes the external system that stores the data of a table. Storage systems such as Apache Kafka or a regular file system can be declared here. The connector might already provide a fixed format with fields and schema.
Some systems support different data formats. For example, a table that is stored in Kafka or in files can encode its rows with CSV, JSON, or Avro. A database connector might need the table schema here. Whether or not a storage system requires the definition of a format is documented for every connector. Different systems also require different types of formats (e.g., column-oriented formats vs. row-oriented formats). The documentation states which format types and connectors are compatible.
The table schema defines the schema of a table that is exposed to SQL queries. It describes how a source maps the data format to the table schema and a sink vice versa. The schema has access to fields defined by the connector or format. It can use one or more fields for extracting or inserting time attributes. If input fields have no deterministic field order, the schema clearly defines column names, their order, and origin.
The subsequent sections will cover each definition part (connector, format, and schema) in more detail. The following example shows how to pass them:
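In Java/Scala, the three parts are passed via the `connect()` descriptor API. The skeleton below is a sketch; the concrete descriptor objects depend on the chosen connector and format:

```java
tableEnvironment
  .connect(...)        // declare the external system to connect to
  .withFormat(...)     // declare the data format (if required by the connector)
  .withSchema(...)     // declare the table schema
  .inAppendMode()      // declare the update mode (for streaming queries)
  .registerTableSource("MyTable");
```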
The table’s type (source, sink, or both) determines how a table is registered. In case of table type both, both a table source and table sink are registered under the same name. Logically, this means that we can both read and write to such a table similarly to a table in a regular DBMS.
For streaming queries, an update mode declares how to communicate between a dynamic table and the storage system for continuous queries.
The following code shows a full example of how to connect to Kafka for reading Avro records.
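A sketch in the SQL Client YAML style of this Flink version; the table name, topic, connection addresses, and Avro schema are illustrative placeholders:

```yaml
tables:
  - name: MyUserTable
    type: source
    update-mode: append
    connector:
      type: kafka
      version: "0.11"        # or "universal" for Kafka 0.11+
      topic: test-input
      startup-mode: earliest-offset
      properties:
        bootstrap.servers: localhost:9092
        group.id: testGroup
    format:
      type: avro
      avro-schema: >
        {
          "type": "record",
          "name": "UserMessage",
          "fields": [
            {"name": "user", "type": "long"},
            {"name": "message", "type": ["string", "null"]}
          ]
        }
    schema:
      - name: user
        type: BIGINT
      - name: message
        type: VARCHAR
```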
In both cases, the desired connection properties are converted into normalized, string-based key-value pairs. So-called table factories create configured table sources, table sinks, and corresponding formats from the key-value pairs. All table factories that can be found via Java’s Service Provider Interfaces (SPI) are taken into account when searching for exactly one matching table factory.
If no factory can be found or multiple factories match for the given properties, an exception will be thrown with additional information about considered factories and supported properties.
The table schema defines the names and types of columns similar to the column definitions of a SQL CREATE TABLE statement. In addition, one can specify how columns are mapped from and to fields of the format in which the table data is encoded. The origin of a field might be important if the name of the column should differ from the input/output format. For instance, a column user_name should reference the field $$-user-name from a JSON format. Additionally, the schema is needed to map types from an external system to Flink’s representation. In case of a table sink, it ensures that only data with valid schema is written to an external system.
The following example shows a simple schema without time attributes and one-to-one field mapping of input/output to table columns.
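A sketch of such a schema in the programmatic API (field names and types are placeholders):

```java
.withSchema(
  new Schema()
    .field("MyField1", Types.INT)       // specify the field name and type
    .field("MyField2", Types.STRING)
    .field("MyField3", Types.BOOLEAN)
)
```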
For each field, the following properties can be declared in addition to the column’s name and type:
Time attributes are essential when working with unbounded streaming tables. Therefore both processing-time and event-time (also known as “rowtime”) attributes can be defined as part of the schema.
For more information about time handling in Flink and especially event-time, we recommend the general event-time section.
Rowtime Attributes
In order to control the event-time behavior for tables, Flink provides predefined timestamp extractors and watermark strategies.
The following timestamp extractors are supported:
The following watermark strategies are supported:
Make sure to always declare both timestamps and watermarks. Watermarks are required for triggering time-based operations.
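A schema field can be declared as a rowtime attribute by combining a timestamp extractor with a watermark strategy. The sketch below uses a bounded out-of-orderness watermark; the source field name and the delay are placeholder assumptions:

```java
.rowtime(
  new Rowtime()
    .timestampsFromField("ts_field")    // extract the timestamp from an existing field
    .watermarksPeriodicBounded(60000)   // emit watermarks with a 60 s maximum out-of-orderness
)
```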
Type Strings
Because DataType is only available in a programming language, type strings are supported for defining types in a YAML file.
The type strings are the same as type declarations in SQL; see the Data Types page for how to declare a type in SQL.
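For example, the schema part of a YAML declaration uses such type strings (field names are placeholders):

```yaml
schema:
  - name: user_id
    data-type: BIGINT
  - name: message
    data-type: STRING
  - name: order_time
    data-type: TIMESTAMP(3)
```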
Append Mode: In append mode, a dynamic table and an external connector only exchange INSERT messages.
Retract Mode: In retract mode, a dynamic table and an external connector exchange ADD and RETRACT messages. An INSERT change is encoded as an ADD message, a DELETE change as a RETRACT message, and an UPDATE change as a RETRACT message for the updated (previous) row and an ADD message for the updating (new) row. In contrast to upsert mode, no unique key must be defined in this mode. However, every update consists of two messages, which is less efficient.
Upsert Mode: In upsert mode, a dynamic table and an external connector exchange UPSERT and DELETE messages. This mode requires a (possibly composite) unique key by which updates can be propagated. The external connector needs to be aware of the unique key attribute in order to apply messages correctly. INSERT and UPDATE changes are encoded as UPSERT messages. DELETE changes as DELETE messages. The main difference to a retract stream is that UPDATE changes are encoded with a single message and are therefore more efficient.
Attention The documentation of each connector states which update modes are supported.
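In the programmatic API, the update mode is declared on the connect descriptor; a sketch:

```java
.connect(...)
  .inAppendMode()    // or: .inRetractMode() / .inUpsertMode(), if the connector supports it
```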
Flink provides a set of connectors for connecting to external systems.
Please note that not all connectors are available in both batch and streaming yet. Furthermore, not every streaming connector supports every streaming mode. Therefore, each connector is tagged accordingly. A format tag indicates that the connector requires a certain type of format.
The file system connector allows for reading and writing from a local or distributed filesystem. A filesystem can be defined as:
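A sketch of the declaration in the programmatic API (the path is a placeholder):

```java
.connect(
  new FileSystem()
    .path("file:///path/to/whatever")   // required: path to a file or directory
)
```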
The file system connector itself is included in Flink and does not require an additional dependency. A corresponding format needs to be specified for reading and writing rows from and to a file system.
Attention File system sources and sinks for streaming are only experimental. In the future, we will support actual streaming use cases, i.e., directory monitoring and bucket output.
The Kafka connector allows for reading and writing from and to an Apache Kafka topic. It can be defined as follows:
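A sketch of the Kafka descriptor (version, topic, and addresses are placeholder assumptions; the optional calls correspond to the start position and partitioning notes below):

```java
.connect(
  new Kafka()
    .version("0.11")                   // required: Kafka version, or "universal"
    .topic("test-topic")               // required: topic from which the table is read
    .property("bootstrap.servers", "localhost:9092")
    .property("group.id", "testGroup")
    .startFromEarliest()               // optional: start reading position
    .sinkPartitionerRoundRobin()       // optional: sink partitioning
)
```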
Specify the start reading position: By default, the Kafka source will start reading data from the committed group offsets in Zookeeper or Kafka brokers. You can specify other start positions, which correspond to the configurations in section Kafka Consumers Start Position Configuration.
Flink-Kafka Sink Partitioning: By default, a Kafka sink writes to at most as many partitions as its own parallelism (each parallel instance of the sink writes to exactly one partition). In order to distribute the writes to more partitions or control the routing of rows into partitions, a custom sink partitioner can be provided. The round-robin partitioner is useful to avoid an unbalanced partitioning. However, it will cause a lot of network connections between all the Flink instances and all the Kafka brokers.
Consistency guarantees: By default, a Kafka sink ingests data with at-least-once guarantees into a Kafka topic if the query is executed with checkpointing enabled.
Kafka 0.10+ Timestamps: Since Kafka 0.10, Kafka messages have a timestamp as metadata that specifies when the record was written into the Kafka topic. These timestamps can be used for a rowtime attribute by selecting timestamps: from-source in YAML or timestampsFromSource() in Java/Scala.
Kafka 0.11+ Versioning: Since Flink 1.7, the Kafka connector definition should be independent of a hard-coded Kafka version. Use the connector version universal as a wildcard for Flink’s Kafka connector that is compatible with all Kafka versions starting from 0.11.
Make sure to add the version-specific Kafka dependency. In addition, a corresponding format needs to be specified for reading and writing rows from and to Kafka.
For append-only queries, the connector can also operate in append mode for exchanging only INSERT messages with the external system. If no key is defined by the query, a key is automatically generated by Elasticsearch.
Disabling flushing on checkpoint: When disabled, a sink will not wait for all pending action requests to be acknowledged by Elasticsearch on checkpoints. Thus, a sink does NOT provide any strong guarantees for at-least-once delivery of action requests.
Key extraction: Flink automatically extracts valid keys from a query. For example, a query SELECT a, b, c FROM t GROUP BY a, b defines a composite key of the fields a and b. The Elasticsearch connector generates a document ID string for every row by concatenating all key fields in the order defined in the query using a key delimiter. A custom representation of null literals for key fields can be defined.
Attention A JSON format defines how to encode documents for the external system, therefore, it must be added as a dependency.
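An Elasticsearch connector declaration in the programmatic API looks roughly as follows (host, index, and document type are placeholder assumptions):

```java
.connect(
  new Elasticsearch()
    .version("6")                      // required: valid connector version
    .host("localhost", 9200, "http")   // required: one or more Elasticsearch hosts
    .index("MyUsers")                  // required: Elasticsearch index
    .documentType("user")              // required: Elasticsearch document type
)
```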
For append-only queries, the connector can also operate in append mode for exchanging only INSERT messages with the external system.
The connector can be defined as follows:
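A sketch in the legacy `connector.*` DDL property style of this Flink version; the table name, column families, and ZooKeeper addresses are placeholders:

```sql
CREATE TABLE MyUserTable (
  hbase_rowkey_name STRING,            -- the single atomic field becomes the row key
  family1 ROW<q1 INT>,                 -- one ROW field per column family
  family2 ROW<q2 STRING, q3 BIGINT>
) WITH (
  'connector.type' = 'hbase',
  'connector.version' = '1.4.3',
  'connector.table-name' = 'hbase_table_name',
  'connector.zookeeper.quorum' = 'localhost:2181',
  'connector.zookeeper.znode.parent' = '/hbase'
)
```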
Columns: All the column families in an HBase table must be declared as ROW type: the field name maps to the column family name, and the nested field names map to the column qualifier names. There is no need to declare all families and qualifiers in the schema; users can declare only what is necessary. Apart from the ROW-type fields, the single field of an atomic type (e.g., STRING, BIGINT) is recognized as the row key of the table. There are no constraints on the name of the row key field.
Temporal join: Lookup joins against HBase do not use any caching; data is always queried directly through the HBase client.
Upsert sink: Flink automatically extracts valid keys from a query. For example, a query SELECT a, b, c FROM t GROUP BY a, b defines a composite key of the fields a and b. If a JDBC table is used as an upsert sink, please make sure the key of the query is one of the unique key sets or the primary key of the underlying database table. This guarantees that the output result is as expected.
Temporal join: The JDBC connector can be used as a lookup source in a temporal join. Currently, only synchronous lookup mode is supported. The lookup cache options (connector.lookup.cache.max-rows and connector.lookup.cache.ttl) must both be specified if either of them is specified. The lookup cache improves the performance of temporal joins against the JDBC connector by querying the cache first instead of sending all requests to the remote database. However, the returned value might not be the latest if it comes from the cache, so this is a trade-off between throughput and correctness.
Writing: By default, connector.write.flush.interval is 0s and connector.write.flush.max-rows is 5000, which means that for low-traffic queries the buffered output rows may not be flushed to the database for a long time. It is therefore recommended to set the flush interval.
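A sketch of a JDBC table declaration in the legacy `connector.*` DDL property style, including the flush options discussed above (URL, driver, and credentials are placeholder assumptions):

```sql
CREATE TABLE MyUserTable (
  id BIGINT,
  name VARCHAR
) WITH (
  'connector.type' = 'jdbc',
  'connector.url' = 'jdbc:mysql://localhost:3306/mydb',
  'connector.table' = 'users',
  'connector.driver' = 'com.mysql.jdbc.Driver',
  'connector.username' = 'name',
  'connector.password' = 'password',
  'connector.write.flush.max-rows' = '5000',   -- default
  'connector.write.flush.interval' = '2s'      -- flush buffered rows at least every 2 s
)
```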
The CSV format aims to comply with RFC-4180 (“Common Format and MIME Type for Comma-Separated Values (CSV) Files”) proposed by the Internet Engineering Task Force (IETF).
The format allows reading and writing CSV data that corresponds to a given format schema. The format schema can be defined either as a Flink type or derived from the desired table schema. Since Flink 1.10, the format derives the format schema from the table schema by default, so it is no longer necessary to declare the format schema explicitly.
If the format schema is equal to the table schema, the schema can also be automatically derived. This allows for
defining schema information only once. The names, types, and fields’ order of the format are determined by the
table’s schema. Time attributes are ignored if their origin is not a field. A from definition in the table
schema is interpreted as a field renaming in the format.
The CSV format can be used as follows:
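A sketch of the CSV format descriptor; all calls are optional overrides of the defaults noted in the comments:

```java
.withFormat(
  new Csv()
    .fieldDelimiter(';')          // optional: field delimiter character (',' by default)
    .lineDelimiter("\r\n")        // optional: line delimiter ("\n" by default)
    .quoteCharacter('\'')         // optional: quote character ('"' by default)
    .allowComments()              // optional: ignore comment lines that start with '#'
    .ignoreParseErrors()          // optional: skip fields and rows with parse errors
    .arrayElementDelimiter("|")   // optional: array element delimiter (";" by default)
    .escapeCharacter('\\')        // optional: escape character (disabled by default)
    .nullLiteral("n/a")           // optional: null literal string (disabled by default)
)
```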
The following Flink SQL types can be read and written: ROW, VARCHAR, ARRAY[_], INT, BIGINT, FLOAT, DOUBLE, BOOLEAN, DATE, TIME, TIMESTAMP, DECIMAL, and NULL (unsupported yet).
Numeric types: Value should be a number but the literal "null" can also be understood. An empty string is
considered null. Values are also trimmed (leading/trailing white space). Numbers are parsed using
Java’s valueOf semantics. Other non-numeric strings may cause a parsing exception.
String and time types: Value is not trimmed. The literal "null" can also be understood. Time types
must be formatted according to the Java SQL time format with millisecond precision. For example:
2018-01-01 for date, 20:43:59 for time, and 2018-01-01 20:43:59.999 for timestamp.
Boolean type: Value is expected to be a boolean ("true", "false") string or "null". Empty strings are
interpreted as false. Values are trimmed (leading/trailing white space). Other values result in an exception.
Nested types: Array and row types are supported for one level of nesting using the array element delimiter.
Primitive byte arrays: Primitive byte arrays are handled in Base64-encoded representation.
Line endings: Line endings need to be considered even for row-based connectors (such as Kafka) so that they are ignored for unquoted string fields at the end of a row.
Escaping and quoting: The following table shows examples of how escaping and quoting affect the parsing
of a string using * for escaping and ' for quoting:
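As an illustration (reconstructed from the format’s escaping rules, so treat the exact rows as an assumption): an escaped character stands for itself, both inside and outside of quotes, and a doubled quote inside a quoted field also yields a literal quote.

```
CSV field     Flink string
123*'4**      123'4*
'123''4**'    123'4*
'a;b*'c'      a;b'c
'a;b''c'      a;b'c
```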
The JSON format allows reading and writing JSON data that corresponds to a given format schema. The format schema can be defined either as a Flink type, as a JSON schema, or derived from the desired table schema. A Flink type enables a more SQL-like definition and mapping to the corresponding SQL data types. The JSON schema allows for more complex and nested structures.
If the format schema is equal to the table schema, the schema can also be automatically derived. This allows for defining schema information only once. The names, types, and fields’ order of the format are determined by the table’s schema. Time attributes are ignored if their origin is not a field. A from definition in the table schema is interpreted as a field renaming in the format.
The JSON format can be used as follows:
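A sketch of the JSON format descriptor; the JSON schema string (the `lon`/`rideTime` properties) is an illustrative placeholder:

```java
.withFormat(
  new Json()
    .failOnMissingField(true)   // optional: fail on missing fields (false by default)
    // define the schema by using a JSON schema string ...
    .jsonSchema(
      "{" +
      "  type: 'object'," +
      "  properties: {" +
      "    lon: { type: 'number' }," +
      "    rideTime: { type: 'string', format: 'date-time' }" +
      "  }" +
      "}")
    // ... or derive it from the table's schema instead:
    // .deriveSchema()
)
```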
The following table shows the mapping of JSON schema types to Flink SQL types:
JSON schema                      Flink SQL
object                           ROW
boolean                          BOOLEAN
array                            ARRAY[_]
number                           DECIMAL
integer                          DECIMAL
string                           STRING
string with format: date-time    TIMESTAMP
string with format: date         DATE
string with format: time         TIME
string with encoding: base64     ARRAY[TINYINT]
null                             NULL (unsupported yet)
Currently, Flink supports only a subset of the JSON schema specification (draft-07). Union types (as well as allOf, anyOf, not) are not supported yet. oneOf and arrays of types are only supported for specifying nullability.
Simple references that link to a common definition in the document are supported as shown in the more complex example below:
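A sketch of such a schema; the `address` definition and the property names are illustrative placeholders:

```json
{
  "definitions": {
    "address": {
      "type": "object",
      "properties": {
        "street": { "type": "string" },
        "city":   { "type": "string" }
      }
    }
  },
  "type": "object",
  "properties": {
    "id": { "type": "integer" },
    "billing_address":  { "$ref": "#/definitions/address" },
    "shipping_address": { "$ref": "#/definitions/address" }
  }
}
```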
Missing Field Handling: By default, a missing JSON field is set to null. You can enable strict JSON parsing that will cancel the source (and query) if a field is missing.
The Apache Avro format allows reading and writing Avro data that corresponds to a given format schema. The format schema can be defined either as a fully qualified class name of an Avro specific record or as an Avro schema string. If a class name is used, the class must be available in the classpath during runtime.
The Avro format can be used as follows:
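A sketch of the Avro format descriptor; the `User` record class is an illustrative placeholder:

```java
.withFormat(
  new Avro()
    // define the schema either by using an Avro specific record class ...
    .recordClass(User.class)
    // ... or by using an Avro schema string instead:
    // .avroSchema("{\"type\": \"record\", \"name\": \"User\", \"fields\": [...]}")
)
```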
Avro types are mapped to the corresponding SQL data types. Union types are only supported for specifying nullability; otherwise, they are converted to an ANY type. The following table shows the mapping:
Avro schema                              Flink SQL
record                                   ROW
enum                                     VARCHAR
array                                    ARRAY[_]
map                                      MAP[VARCHAR, _]
union                                    non-null type or ANY
fixed                                    ARRAY[TINYINT]
string                                   VARCHAR
bytes                                    ARRAY[TINYINT]
int                                      INT
long                                     BIGINT
float                                    FLOAT
double                                   DOUBLE
boolean                                  BOOLEAN
int with logicalType: date               DATE
int with logicalType: time-millis        TIME
int with logicalType: time-micros        INT
long with logicalType: timestamp-millis  TIMESTAMP
long with logicalType: timestamp-micros  BIGINT
bytes with logicalType: decimal          DECIMAL
fixed with logicalType: decimal          DECIMAL
null                                     NULL (unsupported yet)
Avro uses Joda-Time for representing logical date and time types in specific record classes. The Joda-Time dependency is not part of Flink’s distribution. Therefore, make sure that Joda-Time is in your classpath together with your specific record class during runtime. Avro formats specified via a schema string do not require Joda-Time to be present.
Make sure to add the Apache Avro dependency.
Old CSV Format
Attention For prototyping purposes only!
The old CSV format allows reading and writing comma-separated rows using the filesystem connector.
This format describes Flink’s non-standard CSV table source/sink. In the future, the format will be
replaced by a proper RFC-compliant version. Use the RFC-compliant CSV format when writing to Kafka.
Use the old one for stream/batch filesystem operations for now.
The old CSV format is included in Flink and does not require additional dependencies.
Attention The old CSV format for writing rows is limited at the moment. Only a custom field delimiter is supported as optional parameter.
The following table sources and sinks have not yet been migrated (or have not been migrated entirely) to the new unified interfaces.
These are the additional TableSources which are provided with Flink:
Class name      Maven dependency  Batch?  Streaming?  Description
OrcTableSource  flink-orc         Y       N           A TableSource for ORC files.
These are the additional TableSinks which are provided with Flink:
Class name                Maven dependency           Batch?  Streaming?  Description
CsvTableSink              flink-table                Y       Append      A simple sink for CSV files.
JDBCAppendTableSink       flink-jdbc                 Y       Append      Writes a Table to a JDBC table.
CassandraAppendTableSink  flink-connector-cassandra  N       Append      Writes a Table to a Cassandra table.
OrcTableSource
The OrcTableSource reads ORC files. ORC is a file format for structured data and stores the data in a compressed, columnar representation. ORC is very storage efficient and supports projection and filter push-down.
An OrcTableSource is created as shown below:
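A sketch using the builder API; the path and the ORC schema string are illustrative placeholders:

```java
OrcTableSource orcTableSource = OrcTableSource.builder()
  // path to ORC file(s)
  .path("file:///path/to/data")
  // schema of the ORC files
  .forOrcSchema("struct<name:string,addresses:array<struct<street:string,zip:smallint>>>")
  .build();
```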
Note: The OrcTableSource does not support ORC’s Union type yet.
The CsvTableSink emits a Table to one or more CSV files.
The sink only supports append-only streaming tables. It cannot be used to emit a Table that is continuously updated. See the documentation on Table to Stream conversions for details. When emitting a streaming table, rows are written at least once (if checkpointing is enabled) and the CsvTableSink does not split output files into bucket files but continuously writes to the same files.
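A sketch of creating and registering a CsvTableSink; the output path, field names, and types are placeholder assumptions:

```java
CsvTableSink sink = new CsvTableSink(
    path,                  // output path
    "|",                   // optional: delimit fields by '|'
    1,                     // optional: write to a single file
    WriteMode.OVERWRITE);  // optional: override existing files

tableEnv.registerTableSink(
  "csvOutputTable",
  // specify the table schema
  new String[]{"f0", "f1"},
  new TypeInformation[]{Types.STRING, Types.INT},
  sink);

Table table = ...
table.insertInto("csvOutputTable");
```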
JDBCAppendTableSink
The JDBCAppendTableSink emits a Table to a JDBC connection. The sink only supports append-only streaming tables. It cannot be used to emit a Table that is continuously updated. See the documentation on Table to Stream conversions for details.
The JDBCAppendTableSink inserts each Table row at least once into the database table (if checkpointing is enabled). However, you can specify the insertion query using REPLACE or INSERT OVERWRITE to perform upsert writes to the database.
To use the JDBC sink, you have to add the JDBC connector dependency (flink-jdbc) to your project. Then you can create the sink using JDBCAppendSinkBuilder:
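A sketch of the builder; the Derby driver, URL, and query are illustrative placeholders:

```java
JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
  .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
  .setDBUrl("jdbc:derby:memory:ebookshop")
  .setQuery("INSERT INTO books (id) VALUES (?)")
  .setParameterTypes(INT_TYPE_INFO)
  .build();

tableEnv.registerTableSink(
  "jdbcOutputTable",
  // specify the table schema
  new String[]{"id"},
  new TypeInformation[]{Types.INT},
  sink);

Table table = ...
table.insertInto("jdbcOutputTable");
```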
Similar to using JDBCOutputFormat, you have to explicitly specify the name of the JDBC driver, the JDBC URL, the query to be executed, and the field types of the JDBC table.
The CassandraAppendTableSink emits a Table to a Cassandra table. The sink only supports append-only streaming tables. It cannot be used to emit a Table that is continuously updated. See the documentation on Table to Stream conversions for details.
The CassandraAppendTableSink inserts all rows at least once into the Cassandra table if checkpointing is enabled. However, you can specify the query as upsert query.
To use the CassandraAppendTableSink, you have to add the Cassandra connector dependency (flink-connector-cassandra) to your project. The example below shows how to use the CassandraAppendTableSink.
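A sketch; the keyspace, table, column names, and the ClusterBuilder configuration are placeholder assumptions:

```java
ClusterBuilder builder = ... // configure the Cassandra cluster connection

CassandraAppendTableSink sink = new CassandraAppendTableSink(
  builder,
  // the query must match the schema of the table
  "INSERT INTO flink.myTable (id, name, value) VALUES (?, ?, ?)");

tableEnv.registerTableSink(
  "cassandraOutputTable",
  // specify the table schema
  new String[]{"id", "name", "value"},
  new TypeInformation[]{Types.INT, Types.STRING, Types.DOUBLE},
  sink);

Table table = ...
table.insertInto("cassandraOutputTable");
```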