This documentation is for an unreleased version of Apache Flink Table Store. We recommend you use the latest stable version.

OSS #

Build #

To build from source code, either download the source of a release or clone the git repository.

Build the shaded jar with the following command:

mvn clean install -DskipTests

You can find the shaded jar at ./flink-table-store-filesystems/flink-table-store-oss/target/flink-table-store-oss-0.4-SNAPSHOT.jar.

Usage #

Prepare the OSS jar, then configure flink-conf.yaml like the following:

fs.oss.endpoint: oss-cn-hangzhou.aliyuncs.com
fs.oss.accessKeyId: xxx
fs.oss.accessKeySecret: yyy
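
With the configuration above in place, a Flink SQL catalog backed by an OSS warehouse can be created roughly as follows (a sketch: the catalog name and bucket path are placeholders, not names from this document):

```sql
-- Hypothetical catalog name and warehouse path; replace with your own.
CREATE CATALOG my_catalog WITH (
  'type' = 'table-store',
  'warehouse' = 'oss://<bucket-name>/warehouse'
);

USE CATALOG my_catalog;
```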

Place flink-table-store-oss-0.4-SNAPSHOT.jar together with flink-table-store-spark-0.4-SNAPSHOT.jar under Spark's jars directory, and start spark-sql like the following:

spark-sql \
  --conf spark.sql.catalog.tablestore=org.apache.flink.table.store.spark.SparkCatalog \
  --conf spark.sql.catalog.tablestore.warehouse=oss://<bucket-name>/ \
  --conf spark.sql.catalog.tablestore.fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com \
  --conf spark.sql.catalog.tablestore.fs.oss.accessKeyId=xxx \
  --conf spark.sql.catalog.tablestore.fs.oss.accessKeySecret=yyy
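
Once spark-sql is running, tables in the catalog can be created and queried. A minimal sketch (the schema and table name below are illustrative, not defined by this document):

```sql
-- Switch to the tablestore catalog; "default" is an assumed database name.
USE tablestore.default;

-- Hypothetical table for illustration.
CREATE TABLE test_table (id BIGINT, name STRING);
INSERT INTO test_table VALUES (1, 'a');
SELECT * FROM test_table;
```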

NOTE: You need to ensure that the Hive metastore can access OSS.

Place flink-table-store-oss-0.4-SNAPSHOT.jar together with flink-table-store-hive-connector-0.4-SNAPSHOT.jar under Hive's auxlib directory, and start Hive like the following:

SET tablestore.fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com;
SET tablestore.fs.oss.accessKeyId=xxx;
SET tablestore.fs.oss.accessKeySecret=yyy;

Then read the table from the Hive metastore. The table can be created by Flink or Spark; see Catalog with Hive Metastore.

SELECT * FROM test_table;
SELECT COUNT(1) FROM test_table;

Place flink-table-store-oss-0.4-SNAPSHOT.jar together with flink-table-store-trino-0.4-SNAPSHOT.jar under Trino's plugin/tablestore directory.

Add options in etc/catalog/tablestore.properties.

fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com
fs.oss.accessKeyId=xxx
fs.oss.accessKeySecret=yyy
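
With the jars and properties in place, tables stored on OSS can then be queried from the Trino CLI. A sketch, assuming a catalog named tablestore (matching the properties file), a schema named default, and a table created earlier by Flink or Spark:

```sql
-- Fully qualified name: <catalog>.<schema>.<table>; all three are assumptions here.
SELECT * FROM tablestore.default.test_table;
```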