@Internal
public class KafkaDynamicSink
extends Object
implements DynamicTableSink, SupportsWritingMetadata

Nested classes/interfaces inherited from interface DynamicTableSink:
DynamicTableSink.Context, DynamicTableSink.DataStructureConverter, DynamicTableSink.SinkRuntimeProvider
| Modifier and Type | Field | Description |
|---|---|---|
| `protected DataType` | `consumedDataType` | Data type of the consumed data. |
| `protected SinkBufferFlushMode` | `flushMode` | Sink buffer flush config, which is currently only supported in upsert mode. |
| `protected EncodingFormat<SerializationSchema<RowData>>` | `keyEncodingFormat` | Optional format for encoding keys to Kafka. |
| `protected String` | `keyPrefix` | Prefix that needs to be removed from fields when constructing the physical data type. |
| `protected int[]` | `keyProjection` | Indices that determine the key fields and their source positions in the consumed row. |
| `protected List<String>` | `metadataKeys` | Metadata that is appended at the end of a physical sink row. |
| `protected Integer` | `parallelism` | Parallelism of the physical Kafka producer. |
| `protected FlinkKafkaPartitioner<RowData>` | `partitioner` | Partitioner to select the Kafka partition for each item. |
| `protected DataType` | `physicalDataType` | Data type to configure the formats. |
| `protected Properties` | `properties` | Properties for the Kafka producer. |
| `protected String` | `topic` | The Kafka topic to write to. |
| `protected boolean` | `upsertMode` | Flag to determine the sink mode. |
| `protected EncodingFormat<SerializationSchema<RowData>>` | `valueEncodingFormat` | Format for encoding values to Kafka. |
| `protected int[]` | `valueProjection` | Indices that determine the value fields and their source positions in the consumed row. |
Constructor:

KafkaDynamicSink(DataType consumedDataType,
                 DataType physicalDataType,
                 EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat,
                 EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat,
                 int[] keyProjection,
                 int[] valueProjection,
                 String keyPrefix,
                 String topic,
                 Properties properties,
                 FlinkKafkaPartitioner<RowData> partitioner,
                 DeliveryGuarantee deliveryGuarantee,
                 boolean upsertMode,
                 SinkBufferFlushMode flushMode,
                 Integer parallelism,
                 String transactionalIdPrefix)
| Modifier and Type | Method | Description |
|---|---|---|
| `void` | `applyWritableMetadata(List<String> metadataKeys, DataType consumedDataType)` | Provides a list of metadata keys that the consumed `RowData` will contain as appended metadata columns which must be persisted. |
| `String` | `asSummaryString()` | Returns a string that summarizes this sink for printing to a console or log. |
| `DynamicTableSink` | `copy()` | Creates a copy of this instance during planning. |
| `boolean` | `equals(Object o)` | |
| `ChangelogMode` | `getChangelogMode(ChangelogMode requestedMode)` | Returns the set of changes that the sink accepts during runtime. |
| `DynamicTableSink.SinkRuntimeProvider` | `getSinkRuntimeProvider(DynamicTableSink.Context context)` | Returns a provider of runtime implementation for writing the data. |
| `int` | `hashCode()` | |
| `Map<String,DataType>` | `listWritableMetadata()` | Returns the map of metadata keys and their corresponding data types that can be consumed by this table sink for writing. |
protected List<String> metadataKeys
protected DataType consumedDataType
protected final DataType physicalDataType
@Nullable protected final EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat
protected final EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat
protected final int[] keyProjection
protected final int[] valueProjection
@Nullable protected final String keyPrefix
protected final String topic
protected final Properties properties
@Nullable protected final FlinkKafkaPartitioner<RowData> partitioner
protected final boolean upsertMode
protected final SinkBufferFlushMode flushMode
public KafkaDynamicSink(DataType consumedDataType, DataType physicalDataType, @Nullable EncodingFormat<SerializationSchema<RowData>> keyEncodingFormat, EncodingFormat<SerializationSchema<RowData>> valueEncodingFormat, int[] keyProjection, int[] valueProjection, @Nullable String keyPrefix, String topic, Properties properties, @Nullable FlinkKafkaPartitioner<RowData> partitioner, DeliveryGuarantee deliveryGuarantee, boolean upsertMode, SinkBufferFlushMode flushMode, @Nullable Integer parallelism, @Nullable String transactionalIdPrefix)
public ChangelogMode getChangelogMode(ChangelogMode requestedMode)

Description copied from interface: DynamicTableSink
The planner can make suggestions, but the sink has the final decision on what it requires. If the planner does not support this mode, it will throw an error. For example, the sink can return that it only supports ChangelogMode.insertOnly().

Specified by:
getChangelogMode in interface DynamicTableSink
Parameters:
requestedMode - expected set of changes by the current plan

public DynamicTableSink.SinkRuntimeProvider getSinkRuntimeProvider(DynamicTableSink.Context context)
Description copied from interface: DynamicTableSink
There might exist different interfaces for runtime implementation, which is why DynamicTableSink.SinkRuntimeProvider serves as the base interface. Concrete DynamicTableSink.SinkRuntimeProvider interfaces might be located in other Flink modules.

Independent of the provider interface, the table runtime expects that a sink implementation accepts internal data structures (see RowData for more information).

The given DynamicTableSink.Context offers utilities by the planner for creating runtime implementations with minimal dependencies to internal data structures.

SinkProvider is the recommended core interface. SinkFunctionProvider in flink-table-api-java-bridge and OutputFormatProvider are available for backwards compatibility.

Specified by:
getSinkRuntimeProvider in interface DynamicTableSink
See Also:
SinkProvider
public Map<String,DataType> listWritableMetadata()

Description copied from interface: SupportsWritingMetadata
The returned map will be used by the planner for validation and insertion of explicit casts (see LogicalTypeCasts.supportsExplicitCast(LogicalType, LogicalType)) if necessary.

The iteration order of the returned map determines the order of metadata keys in the list passed in SupportsWritingMetadata.applyWritableMetadata(List, DataType). Therefore, it might be beneficial to return a LinkedHashMap if a strict metadata column order is required.

If a sink forwards metadata to one or more formats, we recommend the following column order for consistency:

KEY FORMAT METADATA COLUMNS + VALUE FORMAT METADATA COLUMNS + SINK METADATA COLUMNS

Metadata key names follow the same pattern as mentioned in Factory. In case of duplicate names in format and sink keys, format keys shall have higher precedence.

Regardless of the returned DataTypes, a metadata column is always represented using internal data structures (see RowData).

Specified by:
listWritableMetadata in interface SupportsWritingMetadata
See Also:
EncodingFormat.listWritableMetadata()
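The ordering and precedence rules above can be illustrated with a small, self-contained sketch using JDK collections only (plain `String` stands in for Flink's `DataType`, and all names here are illustrative, not Flink API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WritableMetadataSketch {
    // Merge metadata maps in the recommended column order:
    // key format metadata, then value format metadata, then sink metadata.
    // On duplicate key names, format keys take precedence over sink keys.
    static Map<String, String> merge(
            Map<String, String> keyFormat,
            Map<String, String> valueFormat,
            Map<String, String> sink) {
        // LinkedHashMap preserves insertion order, which later determines the
        // order of keys passed to applyWritableMetadata(List, DataType).
        Map<String, String> merged = new LinkedHashMap<>();
        merged.putAll(keyFormat);
        merged.putAll(valueFormat);
        sink.forEach(merged::putIfAbsent); // format keys win on duplicates
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> valueFormat = new LinkedHashMap<>();
        valueFormat.put("timestamp", "TIMESTAMP_LTZ(3)");
        Map<String, String> sink = new LinkedHashMap<>();
        sink.put("timestamp", "TIMESTAMP_LTZ(3)"); // duplicate: format wins
        sink.put("headers", "MAP<BYTES, BYTES>");
        System.out.println(merge(new LinkedHashMap<>(), valueFormat, sink).keySet());
    }
}
```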
public void applyWritableMetadata(List<String> metadataKeys, DataType consumedDataType)

Description copied from interface: SupportsWritingMetadata
Provides a list of metadata keys that the consumed RowData will contain as appended metadata columns which must be persisted.

Specified by:
applyWritableMetadata in interface SupportsWritingMetadata
Parameters:
metadataKeys - a subset of the keys returned by SupportsWritingMetadata.listWritableMetadata(), ordered by the iteration order of the returned map
consumedDataType - the final input type of the sink; it is intended to be only forwarded, and the planner will decide on the field names to avoid collisions
See Also:
EncodingFormat.applyWritableMetadata(List)
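The contract above, where the planner hands back a subset of the declared keys ordered by the declaring map's iteration order, can be sketched with plain JDK collections (the class and method names are hypothetical, not Flink API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MetadataKeyOrder {
    // Given the declared writable metadata (insertion-ordered) and the set of
    // keys a query actually writes, produce the key list in declaration order,
    // mirroring the ordering the planner uses for applyWritableMetadata.
    static List<String> orderedSubset(Map<String, String> declared, Set<String> requested) {
        List<String> keys = new ArrayList<>();
        for (String key : declared.keySet()) {
            if (requested.contains(key)) {
                keys.add(key);
            }
        }
        return keys;
    }

    public static void main(String[] args) {
        Map<String, String> declared = new LinkedHashMap<>();
        declared.put("headers", "MAP<BYTES, BYTES>");
        declared.put("timestamp", "TIMESTAMP_LTZ(3)");
        // Regardless of how the query orders them, declaration order wins.
        System.out.println(orderedSubset(declared, Set.of("timestamp", "headers")));
    }
}
```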
public DynamicTableSink copy()

Description copied from interface: DynamicTableSink
Creates a copy of this instance during planning.

Specified by:
copy in interface DynamicTableSink

public String asSummaryString()

Description copied from interface: DynamicTableSink
Returns a string that summarizes this sink for printing to a console or log.

Specified by:
asSummaryString in interface DynamicTableSink
Copyright © 2014–2024 The Apache Software Foundation. All rights reserved.