pyflink.table.schema.Schema.Builder.column_by_metadata
- Builder.column_by_metadata(column_name: str, data_type: Union[pyflink.table.types.DataType, str], metadata_key: Optional[str] = None, is_virtual: bool = False) → pyflink.table.schema.Schema.Builder
Declares a metadata column that is appended to this schema.
Metadata columns allow access to connector- and/or format-specific fields for every row of a table. For example, a metadata column can be used to read and write the timestamp from and to Kafka records for time-based operations. The connector and format documentation lists the available metadata fields for every component.
Every metadata field is identified by a string-based key and has a documented data type. The metadata key can be omitted if the column name should be used as the identifying metadata key. For convenience, the runtime will perform an explicit cast if the data type of the column differs from the data type of the metadata field. Of course, this requires that the two data types are compatible.
By default, a metadata column can be used for both reading and writing. However, in many cases an external system provides more read-only metadata fields than writable fields. Therefore, it is possible to exclude metadata columns from being persisted by setting the is_virtual flag to True.
- Parameters
column_name – Column name
data_type – Data type of the column
metadata_key – Identifying metadata key; if None, the column name is used as the metadata key
is_virtual – Whether the column is excluded from persisting; if True, the column is read-only and will not be written back to the external system
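Example (a minimal sketch; the metadata keys "timestamp" and "offset" are assumptions taken from the Kafka connector's documented metadata fields, and the column names are illustrative):

    from pyflink.table import DataTypes, Schema

    schema = (
        Schema.new_builder()
        .column("user_id", DataTypes.BIGINT())
        # The column name "timestamp" doubles as the metadata key; the runtime
        # casts the connector's timestamp type to TIMESTAMP_LTZ(3) if the two
        # types are compatible.
        .column_by_metadata("timestamp", DataTypes.TIMESTAMP_LTZ(3))
        # Read the connector's "offset" metadata under a different column name
        # and mark it virtual, so it is excluded from persisting (read-only).
        .column_by_metadata("record_offset", DataTypes.BIGINT(), "offset", True)
        .build()
    )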