Sink: Batch | Sink: Streaming Append & Upsert Mode
The Elasticsearch connector allows writing into an index of the Elasticsearch engine. This document describes how to set up the Elasticsearch connector to run SQL queries against Elasticsearch.
The connector can operate in upsert mode for exchanging UPDATE/DELETE messages with the external system using the primary key defined on the DDL.
If no primary key is defined on the DDL, the connector can only operate in append mode for exchanging INSERT-only messages with the external system.
In order to set up the Elasticsearch connector, the following table provides dependency information for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.
Elasticsearch Version | Maven dependency | SQL Client JAR |
---|---|---|
6.x | flink-connector-elasticsearch6_2.11 | Download |
7.x and later versions | flink-connector-elasticsearch7_2.11 | Download |
Attention: The Elasticsearch connector works with the JSON format, which defines how to encode documents for the external system; therefore, the JSON format must also be added as a dependency.
The example below shows how to create an Elasticsearch sink table:
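A minimal sketch follows; the table name myUserTable, its columns, the host http://localhost:9200, and the index users are placeholder values chosen for illustration.

```sql
CREATE TABLE myUserTable (
  user_id STRING,
  user_name STRING,
  uv BIGINT,
  pv BIGINT,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  'index' = 'users'
);
```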
Option | Required | Default | Type | Description |
---|---|---|---|---|
connector | required | (none) | String | Specify what connector to use. Valid values are: 'elasticsearch-6' (for Elasticsearch 6.x clusters) and 'elasticsearch-7' (for Elasticsearch 7.x and later versions). |
hosts | required | (none) | String | One or more Elasticsearch hosts to connect to, e.g. 'http://host_name:9092;http://host_name:9093'. |
index | required | (none) | String | Elasticsearch index for every record. Can be a static index (e.g. 'myIndex') or a dynamic index (e.g. 'index-{log_ts\|yyyy-MM-dd}'). See the following Dynamic Index section for more details. |
document-type | required in 6.x | (none) | String | Elasticsearch document type. Not necessary anymore in elasticsearch-7. |
document-id.key-delimiter | optional | _ | String | Delimiter for composite keys ("_" by default), e.g., "$" would result in IDs "KEY1$KEY2$KEY3". |
failure-handler | optional | fail | String | Failure handling strategy in case a request to Elasticsearch fails. Valid strategies are: 'fail' (throws an exception if a request fails and thus causes a job failure), 'ignore' (ignores failures and drops the request), 'retry-rejected' (re-adds requests that have failed due to queue capacity saturation), or a custom class name (for failure handling with an ActionRequestFailureHandler subclass). |
sink.flush-on-checkpoint | optional | true | Boolean | Flush on checkpoint or not. When disabled, a sink will not wait for all pending action requests to be acknowledged by Elasticsearch on checkpoints. Thus, a sink does NOT provide any strong guarantees for at-least-once delivery of action requests. |
sink.bulk-flush.max-actions | optional | 1000 | Integer | Maximum number of buffered actions per bulk request. Can be set to '0' to disable it. |
sink.bulk-flush.max-size | optional | 2mb | MemorySize | Maximum size in memory of buffered actions per bulk request. Must be in MB granularity. Can be set to '0' to disable it. |
sink.bulk-flush.interval | optional | 1s | Duration | The interval to flush buffered actions. Can be set to '0' to disable it. Note that both 'sink.bulk-flush.max-size' and 'sink.bulk-flush.max-actions' can be set to '0' with the flush interval set, allowing for complete asynchronous processing of buffered actions. |
sink.bulk-flush.backoff.strategy | optional | DISABLED | String | Specify how to perform retries if any flush actions failed due to a temporary request error. Valid strategies are: DISABLED (no retry is performed, i.e. fail after the first request error), CONSTANT (wait for the backoff delay between retries), and EXPONENTIAL (initially wait for the backoff delay and increase it exponentially between retries). |
sink.bulk-flush.backoff.max-retries | optional | 8 | Integer | Maximum number of backoff retries. |
sink.bulk-flush.backoff.delay | optional | 50ms | Duration | Delay between each backoff attempt. For CONSTANT backoff, this is simply the delay between each retry. For EXPONENTIAL backoff, this is the initial base delay. |
connection.max-retry-timeout | optional | (none) | Duration | Maximum timeout between retries. |
connection.path-prefix | optional | (none) | String | Prefix string to be added to every REST communication, e.g., '/v1'. |
format | optional | json | String | The Elasticsearch connector supports specifying a format. The format must produce a valid JSON document. By default, the built-in 'json' format is used. Please refer to the JSON Format page for more details. |
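For illustration, the sketch below combines several of these options in a single WITH clause; the table, host, index, and the particular option values are hypothetical and not recommendations.

```sql
CREATE TABLE enriched_orders (
  order_id STRING,
  order_total DOUBLE,
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  'index' = 'enriched_orders',
  -- re-add requests that were rejected due to queue capacity saturation
  'failure-handler' = 'retry-rejected',
  -- flush after 500 buffered actions or every 2 seconds, whichever comes first
  'sink.bulk-flush.max-actions' = '500',
  'sink.bulk-flush.interval' = '2s',
  -- retry failed bulk flushes with exponential backoff
  'sink.bulk-flush.backoff.strategy' = 'EXPONENTIAL',
  'sink.bulk-flush.backoff.max-retries' = '5',
  'sink.bulk-flush.backoff.delay' = '100ms'
);
```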
The Elasticsearch sink can work in either upsert mode or append mode, depending on whether a primary key is defined. If a primary key is defined, the Elasticsearch sink works in upsert mode, which can consume queries containing UPDATE/DELETE messages. If a primary key is not defined, the Elasticsearch sink works in append mode, which can only consume queries containing INSERT-only messages.
In the Elasticsearch connector, the primary key is used to calculate the Elasticsearch document ID, which is a string of up to 512 bytes and cannot contain whitespaces. The Elasticsearch connector generates a document ID string for every row by concatenating all primary key fields in the order defined in the DDL, using the key delimiter specified by document-id.key-delimiter. Certain types are not allowed as primary key fields as they do not have a good string representation, e.g. BYTES, ROW, ARRAY, MAP, etc.
If no primary key is specified, Elasticsearch will generate a document id automatically.
See CREATE TABLE DDL for more details about PRIMARY KEY syntax.
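As a sketch of how the document ID is derived, assume a hypothetical table with a composite primary key and a custom key delimiter; all names below are illustrative only.

```sql
CREATE TABLE user_actions (
  user_id STRING,
  action_id STRING,
  action_count BIGINT,
  PRIMARY KEY (user_id, action_id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  'index' = 'user_actions',
  -- a row with user_id = 'u1' and action_id = 'click' is written with
  -- document ID 'u1$click' (primary key fields joined in DDL order)
  'document-id.key-delimiter' = '$'
);
```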
The Elasticsearch sink supports both static index and dynamic index.
If you want to have a static index, the index option value should be a plain string, e.g. 'myusers'; all records will be consistently written into the "myusers" index.
If you want to have a dynamic index, you can use {field_name} to reference a field value in the record to dynamically generate a target index. You can also use '{field_name|date_format_string}' to convert a field value of TIMESTAMP/DATE/TIME type into the format specified by date_format_string. The date_format_string is compatible with Java's DateTimeFormatter. For example, if the option value is 'myusers-{log_ts|yyyy-MM-dd}', then a record with a log_ts field value of 2020-03-27 12:25:55 will be written into the "myusers-2020-03-27" index.
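A minimal sketch of a dynamic index configuration, assuming a hypothetical table with a log_ts timestamp column; the table, host, and column names are placeholders.

```sql
CREATE TABLE user_events (
  user_id STRING,
  message STRING,
  log_ts TIMESTAMP(3),
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  -- each record is routed to an index derived from its log_ts value,
  -- e.g. a row with log_ts 2020-03-27 12:25:55 goes to 'myusers-2020-03-27'
  'index' = 'myusers-{log_ts|yyyy-MM-dd}'
);
```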
Elasticsearch stores documents as JSON strings, so the data type mapping is between Flink data types and JSON data types. Flink uses the built-in 'json' format for the Elasticsearch connector. Please refer to the JSON Format page for more type mapping details.