OSS #

Download #

Download the Flink Table Store shaded OSS jar for Spark, Hive and Trino.

Usage #

For Flink, prepare the OSS jar, then configure flink-conf.yaml like

fs.oss.endpoint: oss-cn-hangzhou.aliyuncs.com
fs.oss.accessKeyId: xxx
fs.oss.accessKeySecret: yyy
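
With the OSS configuration in place, you can point a catalog's warehouse at OSS from the Flink SQL client. The following is a minimal sketch; the catalog name, bucket and path are placeholders:

-- A minimal sketch: create a Table Store catalog whose warehouse lives on OSS
-- (<bucket-name> and <path> are placeholders for your own bucket and path).
CREATE CATALOG oss_catalog WITH (
  'type' = 'table-store',
  'warehouse' = 'oss://<bucket-name>/<path>'
);

USE CATALOG oss_catalog;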

Place flink-table-store-oss-0.3.0.jar together with flink-table-store-spark-0.3.0.jar under Spark’s jars directory, and start like

spark-sql \
  --conf spark.sql.catalog.tablestore=org.apache.flink.table.store.spark.SparkCatalog \
  --conf spark.sql.catalog.tablestore.warehouse=oss://<bucket-name>/ \
  --conf spark.sql.catalog.tablestore.fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com \
  --conf spark.sql.catalog.tablestore.fs.oss.accessKeyId=xxx \
  --conf spark.sql.catalog.tablestore.fs.oss.accessKeySecret=yyy
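
Once the session starts, tables in the OSS warehouse can be queried through the tablestore catalog. A minimal sketch, assuming a table my_table already exists in the default database:

-- my_table is a placeholder for a table created by Flink or Spark.
SELECT * FROM tablestore.default.my_table;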

NOTE: You need to ensure that the Hive metastore can access OSS.

Place flink-table-store-oss-0.3.0.jar together with flink-table-store-hive-connector-0.3.0.jar under Hive’s auxlib directory, and start like

SET tablestore.fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com;
SET tablestore.fs.oss.accessKeyId=xxx;
SET tablestore.fs.oss.accessKeySecret=yyy;

Then read tables from the Hive metastore. The tables can be created by Flink or Spark, see Catalog with Hive Metastore.

SELECT * FROM test_table;
SELECT COUNT(1) FROM test_table;

Place flink-table-store-oss-0.3.0.jar together with flink-table-store-trino-0.3.0.jar under Trino’s plugin/tablestore directory.

Add the following options to etc/catalog/tablestore.properties.

fs.oss.endpoint=oss-cn-hangzhou.aliyuncs.com
fs.oss.accessKeyId=xxx
fs.oss.accessKeySecret=yyy
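
After restarting Trino, tables in the OSS warehouse can be queried through the tablestore catalog. A minimal sketch, assuming the catalog is named tablestore and test_table exists in the default schema:

-- test_table is a placeholder for a table created by Flink or Spark.
SELECT * FROM tablestore.default.test_table;
SELECT COUNT(1) FROM tablestore.default.test_table;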