This page describes how to deploy a Flink job and session cluster on Kubernetes.
Please follow Kubernetes’ setup guide in order to deploy a Kubernetes cluster. If you want to run Kubernetes locally, we recommend using MiniKube.
Note: If using MiniKube, make sure to execute
minikube ssh 'sudo ip link set docker0 promisc on'
before deploying a Flink cluster. Otherwise, Flink components are not able to reference themselves through a Kubernetes service.
A Flink session cluster is executed as a long-running Kubernetes Deployment. Note that you can run multiple Flink jobs on a session cluster. Each job needs to be submitted to the cluster after the cluster has been deployed.
A basic Flink session cluster deployment in Kubernetes has three components: a Deployment which runs the JobManager, a Deployment for a pool of TaskManagers, and a Service exposing the JobManager’s REST and UI ports.
Using the resource definitions for a session cluster (listed in the appendix below), launch the cluster with the kubectl command:
kubectl create -f jobmanager-service.yaml
kubectl create -f jobmanager-deployment.yaml
kubectl create -f taskmanager-deployment.yaml
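Once the session cluster is up, jobs can be submitted to it from a local Flink distribution. The following is a minimal sketch, assuming the JobManager Service is named flink-jobmanager (as in the appendix definitions) and exposes the REST/UI port 8081; it forwards that port locally and submits one of the example jobs bundled with Flink:
# Forward the JobManager's REST/UI port to localhost (run in a separate terminal).
kubectl port-forward service/flink-jobmanager 8081:8081
# Submit a bundled example job against the forwarded REST endpoint.
./bin/flink run -m localhost:8081 ./examples/streaming/WordCount.jar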
You can then access the Flink UI via kubectl proxy: run
kubectl proxy
in a terminal and open the flink-jobmanager service through the Kubernetes API server proxy in your browser.
In order to terminate the Flink session cluster, use kubectl:
kubectl delete -f jobmanager-deployment.yaml
kubectl delete -f taskmanager-deployment.yaml
kubectl delete -f jobmanager-service.yaml
A Flink job cluster is a dedicated cluster which runs a single job. The job is part of the image and, thus, there is no extra job submission needed.
The Flink job cluster image needs to contain the user code jars of the job for which the cluster is started. Therefore, one needs to build a dedicated container image for every job. Please follow these instructions to build the Docker image.
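For illustration only, such an image can be built by layering the job jar on top of the Flink base image; the jar name, target path, and base tag below are hypothetical, and the linked instructions remain the authoritative build procedure:
# Hypothetical sketch: bundle the user-code jar into a dedicated, job-specific image.
FROM flink:latest
# Path and jar name are placeholders; the official build scripts determine where the jar must live.
COPY ./target/my-flink-job.jar /opt/flink/usrlib/my-flink-job.jar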
In order to deploy a job cluster on Kubernetes, please follow these instructions.
An early version of a Flink Helm chart is available on GitHub.
The Deployment definitions use the pre-built image flink:latest
which can be found on Docker Hub.
The image is built from this GitHub repository.
jobmanager-deployment.yaml
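The exact definition ships with the Flink documentation; the following is only a minimal sketch of its shape, using the apps/v1 API, the standard Flink ports (6123 RPC, 6124 blob, 8081 UI), and illustrative names and labels:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-jobmanager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flink
      component: jobmanager
  template:
    metadata:
      labels:
        app: flink
        component: jobmanager
    spec:
      containers:
      - name: jobmanager
        image: flink:latest
        args: ["jobmanager"]
        ports:
        - containerPort: 6123
          name: rpc
        - containerPort: 6124
          name: blob
        - containerPort: 8081
          name: ui
        env:
        # The official image reads this variable to configure jobmanager.rpc.address.
        - name: JOB_MANAGER_RPC_ADDRESS
          value: flink-jobmanager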
taskmanager-deployment.yaml
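Again a sketch rather than the verbatim definition; the TaskManager pods find the JobManager through the Service name set in JOB_MANAGER_RPC_ADDRESS, and the replica count is illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-taskmanager
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flink
      component: taskmanager
  template:
    metadata:
      labels:
        app: flink
        component: taskmanager
    spec:
      containers:
      - name: taskmanager
        image: flink:latest
        args: ["taskmanager"]
        env:
        - name: JOB_MANAGER_RPC_ADDRESS
          value: flink-jobmanager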
jobmanager-service.yaml
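The Service makes the JobManager reachable under the stable name flink-jobmanager, which the deployment sketches above assume; sketched as:
apiVersion: v1
kind: Service
metadata:
  name: flink-jobmanager
spec:
  ports:
  - name: rpc
    port: 6123
  - name: blob
    port: 6124
  - name: ui
    port: 8081
  selector:
    app: flink
    component: jobmanager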