
Development Guide #

We have gathered a set of best practices here to aid development.

Local environment setup #

We recommend you install Docker Desktop, minikube and helm on your local machine. For the setup please refer to our quickstart.

Building docker images #

You can build your own flavor of the image as follows by specifying your <repo>:

docker build . -t <repo>/flink-kubernetes-operator:latest
docker push <repo>/flink-kubernetes-operator:latest

If you are using minikube you might want to load the image directly instead of pushing it to a registry:

minikube image load <repo>/flink-kubernetes-operator:latest

You can take a shortcut by using the Docker daemon of your minikube installation directly as follows:

eval $(minikube docker-env)
DOCKER_BUILDKIT=1 docker build . -t <repo>/flink-kubernetes-operator:latest

When you want to reset your environment to the defaults you can do the following:

eval $(minikube docker-env --unset)

The most useful insight about minikube is that it is just a Docker container on your local machine; you can SSH into it with the following command in case you need to hack something there (like adding a hostPath mount or modifying Docker images).

minikube ssh
Last login: Wed Mar 9 10:01:21 2022 from 192.168.49.1
docker@minikube:~$ docker images
REPOSITORY                                             TAG                IMAGE ID       CREATED         SIZE
flink-kubernetes-operator                              latest             cf7856d9ef59   23 hours ago    578MB
docker@minikube:~$ exit

Installing the operator locally #

helm install flink-kubernetes-operator helm/flink-kubernetes-operator --set image.repository=<repo>/flink-kubernetes-operator --set image.tag=latest
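After the install, you can verify that the operator is running. This is only a quick sanity check; the pod name suffix is generated by Kubernetes, and the deployment name is assumed here to match the Helm release name:

```shell
# List the Helm release and check that the operator pod is Running.
helm list
kubectl get pods
# Tail the operator logs (assumes the deployment is named after the release).
kubectl logs -f deploy/flink-kubernetes-operator
```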

Running the operator locally #

You can run or debug the FlinkOperator from your preferred IDE. The operator accesses the deployed Flink clusters through the REST interface. When running locally, the rest.port, rest.address and kubernetes.rest-service.exposed.type Flink configuration parameters must be modified.

When using minikube tunnel, the rest service is exposed on localhost:8081:

> minikube tunnel

> kubectl get services
NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
basic-session-example        ClusterIP      None           <none>        6123/TCP,6124/TCP   14h
basic-session-example-rest   LoadBalancer   10.96.36.250   127.0.0.1     8081:30572/TCP      14h

The operator picks up the default log and Flink configurations from /opt/flink/conf. You can put the REST configuration parameters there:

cat /opt/flink/conf/flink-conf.yaml
rest.port: 8081
rest.address: localhost
kubernetes.rest-service.exposed.type: LoadBalancer

Uninstalling the operator locally #

helm uninstall flink-kubernetes-operator
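Note that helm uninstall leaves the CRDs (and therefore any remaining custom resources) in place. If you want a fully clean environment, delete them explicitly; the flinksessionjobs CRD name below is an assumption based on recent operator versions and may not exist in yours:

```shell
# Helm does not remove CRDs on uninstall.
# WARNING: deleting a CRD also deletes all custom resources of that kind.
kubectl delete crd flinkdeployments.flink.apache.org
kubectl delete crd flinksessionjobs.flink.apache.org
```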

Generating and Upgrading the CRD #

By default, the CRD is generated by the Fabric8 CRDGenerator when building from source. When installing flink-kubernetes-operator for the first time, the CRD is applied to the Kubernetes cluster automatically. However, it will not be removed or upgraded when re-installing flink-kubernetes-operator, as described in the relevant Helm documentation. So if the CRD changes, you have to delete the CRD resource manually and re-install flink-kubernetes-operator.

kubectl delete crd flinkdeployments.flink.apache.org
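As an alternative to deleting and re-installing, you may be able to re-apply the generated CRD in place. The path below is an assumption (it presumes the chart bundles the generated CRD under a crds directory); adjust it to where the CRD file actually lives in your checkout:

```shell
# Hypothetical path: point this at the generated CRD in your build.
kubectl replace -f helm/flink-kubernetes-operator/crds/flinkdeployments.flink.apache.org-v1.yml
```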

Mounts #

The operator supports specifying volume mounts. The default hostPath mounts can be activated with the following command. You can change the default mounts in helm/flink-kubernetes-operator/values.yaml.

helm install flink-kubernetes-operator helm/flink-kubernetes-operator --set operatorVolumeMounts.create=true --set operatorVolumes.create=true

CI/CD #

We use GitHub Actions to automate our software development workflows in the same place we store code and collaborate on pull requests and issues. Individual tasks, called actions, are combined into custom workflows that build, test, package, release, or deploy the project.

Considering the cost of running the builds, stability, and maintainability, flink-kubernetes-operator chose GitHub Actions and built the whole CI/CD solution on it. All unit tests, integration tests, and end-to-end tests are triggered for each PR.
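As an illustration only (this is not the project's actual workflow definition, which lives under .github/workflows/ in the repository), a minimal GitHub Actions workflow that runs the Maven tests on every pull request could look like this:

```yaml
# Illustrative sketch, not the real flink-kubernetes-operator CI definition.
name: CI
on:
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '11'
      - name: Run tests
        run: mvn -B clean verify
```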

Note: Please make sure CI passes before merging.