Aliyun Object Storage Service (Aliyun OSS) is widely used, particularly among cloud users in China, and provides cloud object storage for a variety of use cases. You can use OSS with Flink for reading and writing data, as well as in conjunction with the streaming state backends.
You can use OSS objects like regular files by specifying paths in the following format:
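For example (the bucket and object names are placeholders):

```
oss://<your-bucket>/<object-name>
```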
The following shows how to use OSS in a Flink job:
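A minimal sketch of reading from, writing to, and checkpointing to OSS; the bucket and object names are placeholders, and the checkpoint-storage call assumes Flink 1.13 or later:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Read from an OSS bucket
DataStream<String> input = env.readTextFile("oss://<your-bucket>/<object-name>");

// Write back to an OSS bucket
input.writeAsText("oss://<your-bucket>/<output-object>");

// Use OSS to store checkpoints for the streaming state backend
env.getCheckpointConfig().setCheckpointStorage("oss://<your-bucket>/<checkpoint-dir>");
```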
To use flink-oss-fs-hadoop, copy the respective JAR file from the opt directory to a directory under the plugins directory of your Flink distribution before starting Flink.
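For example, from the root of the Flink distribution (the exact JAR version suffix depends on your Flink release):

```bash
mkdir ./plugins/oss-fs-hadoop
cp ./opt/flink-oss-fs-hadoop-<flink-version>.jar ./plugins/oss-fs-hadoop/
```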
flink-oss-fs-hadoop registers default FileSystem wrappers for URIs with the oss:// scheme.
After setting up the OSS FileSystem wrapper, you need to add some configurations to make sure that Flink is allowed to access your OSS buckets.
To allow for easy adoption, you can use the same configuration keys in flink-conf.yaml as in Hadoop's core-site.xml. You can see the configuration keys in the Hadoop OSS documentation.
There are some required configurations that must be added to flink-conf.yaml (other configurations defined in the Hadoop OSS documentation are advanced settings used for performance tuning).
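A sketch of the required keys, with placeholder values to be replaced by your own Aliyun account settings:

```yaml
fs.oss.endpoint: <Aliyun OSS endpoint to connect to>
fs.oss.accessKeyId: <Aliyun access key ID>
fs.oss.accessKeySecret: <Aliyun access key secret>
```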
An alternative CredentialsProvider can also be configured in flink-conf.yaml.
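For example, to read the credentials from environment variables instead of storing them in flink-conf.yaml, a provider from the Aliyun OSS SDK can be configured (a sketch):

```yaml
# Reads credentials from the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables
fs.oss.credentials.provider: com.aliyun.oss.common.auth.EnvironmentVariableCredentialsProvider
```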
Other credential providers can be found under https://github.com/aliyun/aliyun-oss-java-sdk/tree/master/src/main/java/com/aliyun/oss/common/auth.