| Key | Scope | Default | Type | Description |
|-----|-------|---------|------|-------------|
| table.exec.async-lookup.buffer-capacity | Batch, Streaming | 100 | Integer | The maximum number of async I/O operations that the async lookup join can trigger. |
| table.exec.async-lookup.timeout | Batch, Streaming | 3 min | Duration | The timeout for an asynchronous lookup operation to complete. |
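These options are plain configuration keys, so they can be set programmatically on the `TableConfig` of a `TableEnvironment`. A minimal sketch in Java, assuming a reasonably recent Flink version; the chosen values are illustrative, not recommendations:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AsyncLookupConfigExample {
    public static void main(String[] args) {
        // Create a streaming TableEnvironment; the rest of the job is omitted.
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Allow up to 200 in-flight async I/O operations per lookup join
        // (hypothetical value; the default is 100).
        tEnv.getConfig().getConfiguration()
                .setString("table.exec.async-lookup.buffer-capacity", "200");

        // Fail asynchronous lookups that do not complete within 1 minute
        // (hypothetical value; the default is 3 min).
        tEnv.getConfig().getConfiguration()
                .setString("table.exec.async-lookup.timeout", "1 min");
    }
}
```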
| Key | Scope | Default | Type | Description |
|-----|-------|---------|------|-------------|
| table.exec.disabled-operators | Batch | (none) | String | Mainly for testing. A comma-separated list of operator names; each name represents a kind of operator to disable. Operators that can be disabled include "NestedLoopJoin", "ShuffleHashJoin", "BroadcastHashJoin", "SortMergeJoin", "HashAgg", and "SortAgg". By default, no operator is disabled. |
| table.exec.mini-batch.allow-latency | Streaming | 0 ms | Duration | The maximum latency for which MiniBatch is allowed to buffer input records. MiniBatch is an optimization that buffers input records to reduce state access. It is triggered either when the allowed latency interval elapses or when the maximum number of buffered records is reached. NOTE: if table.exec.mini-batch.enabled is set to true, this value must be greater than zero. |
| table.exec.mini-batch.enabled | Streaming | false | Boolean | Specifies whether to enable the MiniBatch optimization. MiniBatch buffers input records to reduce state access. Disabled by default; to enable it, set this config to true. NOTE: if mini-batch is enabled, 'table.exec.mini-batch.allow-latency' and 'table.exec.mini-batch.size' must also be set. |
| table.exec.mini-batch.size | Streaming | -1 | Long | The maximum number of input records that can be buffered for MiniBatch. MiniBatch is an optimization that buffers input records to reduce state access. It is triggered either when the allowed latency interval elapses or when the maximum number of buffered records is reached. NOTE: MiniBatch currently only works for non-windowed aggregations. If table.exec.mini-batch.enabled is set to true, this value must be positive. |
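The three mini-batch options are meant to be set together: once mini-batch is enabled, both thresholds are required. A sketch, reusing the `tEnv` from the earlier example; the 5-second and 5000-record thresholds are illustrative:

```java
// Reusing the TableEnvironment tEnv from the earlier sketch.
// Configuration is org.apache.flink.configuration.Configuration.
Configuration conf = tEnv.getConfig().getConfiguration();
conf.setString("table.exec.mini-batch.enabled", "true");      // turn buffering on
conf.setString("table.exec.mini-batch.allow-latency", "5 s"); // flush at least every 5 s
conf.setString("table.exec.mini-batch.size", "5000");         // or after 5000 buffered records
```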
| Key | Scope | Default | Type | Description |
|-----|-------|---------|------|-------------|
| table.exec.resource.default-parallelism | Batch, Streaming | -1 | Integer | Sets the default parallelism for all operators (such as aggregate, join, filter) to run with parallel instances. This config has a higher priority than the parallelism of the StreamExecutionEnvironment (in fact, it overrides it). A value of -1 indicates that no default parallelism is set, in which case the parallelism of the StreamExecutionEnvironment is used. |
| table.exec.sink.not-null-enforcer | Batch, Streaming | ERROR | Enum | The NOT NULL column constraint on a table enforces that null values cannot be inserted into the table. Flink supports 'ERROR' (default) and 'DROP' enforcement behavior. By default, Flink checks values and throws a runtime exception when a null value is written into a NOT NULL column. Users can change the behavior to 'DROP' to silently drop such records without raising an exception. Possible values: "ERROR", "DROP". |
| table.exec.sink.upsert-materialize | Streaming | AUTO | Enum | Because shuffling in a distributed system can reorder changelog data, the records received by a sink may not follow the global upsert order, so an upsert materialize operator can be added before the upsert sink. It receives the upstream changelog records and generates an upsert view for the downstream. By default, the materialize operator is added when a distributed disorder can occur on the unique keys. You can also choose no materialization (NONE) or forced materialization (FORCE). Possible values: "NONE", "AUTO", "FORCE". |
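The enum-typed sink options and the default parallelism are set the same way as any other key. A sketch with hypothetical values, again reusing `tEnv`:

```java
Configuration conf = tEnv.getConfig().getConfiguration();
// Silently drop rows that would write NULL into a NOT NULL column
// instead of failing the job (default is ERROR).
conf.setString("table.exec.sink.not-null-enforcer", "DROP");
// Always add the upsert-materialize operator in front of upsert sinks,
// regardless of whether a distributed disorder is detected (default is AUTO).
conf.setString("table.exec.sink.upsert-materialize", "FORCE");
// Run operators with 4 parallel instances, overriding the
// StreamExecutionEnvironment parallelism (-1 would fall back to it).
conf.setString("table.exec.resource.default-parallelism", "4");
```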
| Key | Scope | Default | Type | Description |
|-----|-------|---------|------|-------------|
| table.exec.sort.async-merge-enabled | Batch | true | Boolean | Whether to asynchronously merge sorted spill files. |
| table.exec.sort.default-limit | Batch | -1 | Integer | The default limit when the user does not set a limit after an ORDER BY. A value of -1 means this configuration is ignored. |
| table.exec.sort.max-num-file-handles | Batch | 128 | Integer | The maximal fan-in for external merge sort. It limits the number of file handles per operator. If it is too small, it may cause intermediate merging; if it is too large, too many files are opened at the same time, which consumes memory and leads to random reads. |
| table.exec.source.cdc-events-duplicate | Streaming | false | Boolean | Indicates whether the CDC (Change Data Capture) sources in the job may produce duplicate change events, requiring the framework to deduplicate them to obtain a consistent result. A CDC source is a source that produces full change events, i.e. INSERT/UPDATE_BEFORE/UPDATE_AFTER/DELETE, for example a Kafka source with the Debezium format. Duplicate change events are a common case, because CDC tools (e.g. Debezium) usually provide at-least-once delivery when a failover happens; in such abnormal situations Debezium may deliver duplicate change events to Kafka, and Flink will consume them, which may cause the query to produce wrong results or unexpected exceptions. It is therefore recommended to turn on this configuration if your CDC tool provides at-least-once delivery. Enabling it requires a PRIMARY KEY to be defined on the CDC source; the primary key is used to deduplicate change events and generate a normalized changelog stream, at the cost of an additional stateful operator. |
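A sketch of wiring this up: the flag is turned on and the CDC source declares the PRIMARY KEY that the deduplication needs. The table name, columns, and connector options below are hypothetical and abbreviated; a real Kafka source needs its full connector configuration:

```java
// Tell the planner that CDC sources may emit duplicate change events.
tEnv.getConfig().getConfiguration()
        .setString("table.exec.source.cdc-events-duplicate", "true");

// The PRIMARY KEY is required so the framework can deduplicate events.
tEnv.executeSql(
        "CREATE TABLE orders (" +
        "  order_id BIGINT," +
        "  amount   DECIMAL(10, 2)," +
        "  PRIMARY KEY (order_id) NOT ENFORCED" +
        ") WITH (" +
        "  'connector' = 'kafka'," +
        "  'topic' = 'orders'," +
        "  'properties.bootstrap.servers' = 'localhost:9092'," +
        "  'format' = 'debezium-json'" +
        ")");
```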
| Key | Scope | Default | Type | Description |
|-----|-------|---------|------|-------------|
| table.exec.source.idle-timeout | Streaming | 0 ms | Duration | When a source does not receive any elements within the given timeout, it is marked as temporarily idle. This allows downstream tasks to advance their watermarks without having to wait for watermarks from the idle source. The default value of 0 means idleness detection is disabled. |
| table.exec.spill-compression.block-size | Batch | 64 kb | MemorySize | The memory block size used for compression when spilling data. A larger block size yields a higher compression ratio, but consumes more memory. |
| table.exec.spill-compression.enabled | Batch | true | Boolean | Whether to compress spilled data. Currently, compression of spilled data is only supported for the sort, hash-agg, and hash-join operators. |
| table.exec.state.ttl | Streaming | 0 ms | Duration | Specifies a minimum time interval for how long idle state (i.e. state that has not been updated) is retained. State is never cleared before it has been idle for at least this minimum time, and is cleared at some point after it has been idle. NOTE: cleaning up state requires additional bookkeeping overhead. The default value of 0 means state is never cleaned up. |
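A sketch for the TTL, reusing `tEnv`. In recent Flink versions, `TableConfig` also exposes a typed convenience setter that backs this same option; the 24-hour retention is illustrative:

```java
// Typed setter on TableConfig (backs table.exec.state.ttl in recent versions).
tEnv.getConfig().setIdleStateRetention(java.time.Duration.ofHours(24));

// Equivalent string form using the key from this table:
// tEnv.getConfig().getConfiguration().setString("table.exec.state.ttl", "24 h");
```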
| Key | Scope | Default | Type | Description |
|-----|-------|---------|------|-------------|
| table.exec.window-agg.buffer-size-limit | Batch | 100000 | Integer | Sets the buffer size limit for window elements in the group window aggregation operator. |