A typical Hive job is scheduled to run periodically, so there is a large delay before data becomes available.
Flink supports writing to, reading from, and joining Hive tables in a streaming fashion.
There are three kinds of streaming interaction: streaming writing, streaming reading, and streaming temporal join.
Hive tables support streaming writes, based on the Filesystem Streaming Sink.
The Hive streaming sink re-uses the Filesystem Streaming Sink to integrate Hadoop OutputFormat/RecordWriter into streaming writing. Hadoop RecordWriters are bulk-encoded formats, and bulk formats roll files on every checkpoint.
By default, only a renaming committer is available, which means the S3 filesystem cannot support exactly-once semantics.
If you want to use the Hive streaming sink on an S3 filesystem, you can set the following parameter to
false in TableConfig to use Flink's native writers (which only work for Parquet and ORC). Note that this
parameter affects all sinks of the job:
Key | Default | Type | Description |
---|---|---|---|
table.exec.hive.fallback-mapred-writer | true | Boolean | If false, use Flink's native writer to write Parquet and ORC files; if true, use the Hadoop mapred record writer to write Parquet and ORC files. |
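This option can be set on the job's TableConfig. As a minimal sketch, assuming a SQL client session (in the Table API the same key can be set on the configuration returned by `TableEnvironment#getConfig`; the exact `SET` syntax may differ slightly between Flink versions):

```sql
-- Use Flink's native Parquet/ORC writers instead of the Hadoop mapred
-- record writers (note: this affects all sinks of the job).
SET table.exec.hive.fallback-mapred-writer=false;
```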
The example below shows how the streaming sink can be used to run a streaming query that writes data from Kafka into a Hive table with partition commits, and then run a batch query to read that data back out.
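A minimal sketch of such a pipeline follows. The table names, columns, Kafka topic, and broker address are illustrative assumptions; the partition-commit properties are regular Flink filesystem/Hive sink options:

```sql
-- Create a partitioned Hive table using the Hive dialect. Partitions are
-- committed to the metastore (plus a _SUCCESS file) once the watermark
-- passes the partition time plus the configured delay.
SET table.sql-dialect=hive;
CREATE TABLE hive_table (
  user_id STRING,
  order_amount DOUBLE
) PARTITIONED BY (dt STRING, hr STRING) STORED AS parquet TBLPROPERTIES (
  'partition.time-extractor.timestamp-pattern'='$dt $hr:00:00',
  'sink.partition-commit.trigger'='partition-time',
  'sink.partition-commit.delay'='1 h',
  'sink.partition-commit.policy.kind'='metastore,success-file'
);

-- Create the Kafka source table using the default dialect.
SET table.sql-dialect=default;
CREATE TABLE kafka_table (
  user_id STRING,
  order_amount DOUBLE,
  log_ts TIMESTAMP(3),
  WATERMARK FOR log_ts AS log_ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

-- Streaming query: continuously insert from Kafka into the Hive table.
INSERT INTO hive_table
SELECT user_id, order_amount, DATE_FORMAT(log_ts, 'yyyy-MM-dd'), DATE_FORMAT(log_ts, 'HH')
FROM kafka_table;

-- Batch query: read the committed data back, with partition pruning.
SELECT * FROM hive_table WHERE dt='2020-05-20' AND hr='12';
```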
To improve the freshness of Hive reads, Flink supports streaming reads of Hive tables:
You can even use a 10-minute partitioning strategy and combine Flink's Hive streaming reading with Hive streaming writing to bring the latency of a Hive data warehouse down to a quasi-real-time, minute level.
Key | Default | Type | Description |
---|---|---|---|
streaming-source.enable | false | Boolean | Enable streaming source or not. NOTE: Please make sure that each partition/file is written atomically, otherwise the reader may get incomplete data. |
streaming-source.monitor-interval | 1 m | Duration | Time interval for consecutively monitoring partitions/files. |
streaming-source.consume-order | create-time | String | The consume order of the streaming source; supports create-time and partition-time. create-time compares partition/file creation time; this is not the partition creation time in the Hive metastore, but the folder/file modification time in the filesystem. partition-time compares the time represented by the partition name. If the partition folder somehow gets updated, e.g. a new file is added to it, that can affect how the data is consumed. For non-partitioned tables, this value should always be 'create-time'. |
streaming-source.consume-start-offset | 1970-00-00 | String | Start offset for streaming consuming. How to parse and compare offsets depends on the consume order. For create-time and partition-time, this should be a timestamp string (yyyy-[m]m-[d]d [hh:mm:ss]). For partition-time, the partition time extractor is used to extract the time from the partition name. |
Note:
The example below shows how to read a Hive table incrementally.
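A minimal sketch, reusing the hive_table from the writing example above. The streaming-source options are passed as dynamic table options (SQL hints), which may need to be enabled via table.dynamic-table-options.enabled:

```sql
-- Continuously monitor hive_table and read partitions whose time is
-- at or after the given start offset.
SELECT * FROM hive_table
/*+ OPTIONS('streaming-source.enable'='true',
            'streaming-source.consume-start-offset'='2020-05-20') */;
```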
You can use a Hive table as a temporal table and join streaming data with it. Please follow the example below to find out how to join such a temporal table.
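A minimal sketch of such a join, assuming a streaming orders table with a processing-time attribute proctime and a Hive dimension table dim_user; all names are illustrative:

```sql
-- Processing-time temporal join: each streaming record is looked up
-- against the cached snapshot of the Hive table.
SELECT o.order_id, o.order_amount, u.user_name
FROM orders AS o
JOIN dim_user FOR SYSTEM_TIME AS OF o.proctime AS u
  ON o.user_id = u.user_id;
```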
When performing the join, the Hive table will be cached in the TaskManager's memory, and each record from the stream is looked up in the Hive table to decide whether a match is found. You don't need any extra settings to use a Hive table as a temporal table. Optionally, you can configure the TTL of the Hive table cache with the following property. After the cache expires, the Hive table will be scanned again to load the latest data.
Key | Default | Type | Description |
---|---|---|---|
lookup.join.cache.ttl | 60 min | Duration | The cache TTL (e.g. 10min) for the build table in a lookup join. By default, the TTL is 60 minutes. |
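For example, assuming SQL hints are enabled, the TTL could be overridden for a single query with a dynamic table option (the join itself follows the sketch above):

```sql
-- Cache the Hive build table for 10 minutes instead of the default 60.
SELECT o.order_id, o.order_amount, u.user_name
FROM orders AS o
JOIN dim_user /*+ OPTIONS('lookup.join.cache.ttl'='10 min') */
  FOR SYSTEM_TIME AS OF o.proctime AS u
  ON o.user_id = u.user_id;
```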
Note: avoid setting too small a value for lookup.join.cache.ttl. You'll probably have performance issues if
your Hive table needs to be updated and reloaded too frequently.