This documentation provides instructions on how to set up Flink fully automatically with Hadoop 1 or Hadoop 2 on top of a Google Compute Engine cluster. This is made possible by Google's bdutil, which starts a cluster and deploys Flink with Hadoop. To get started, just follow the steps below.
Please follow the instructions on how to set up the Google Cloud SDK. In particular, make sure to authenticate with Google Cloud using the following command:
gcloud auth login
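If you work with multiple Google Cloud accounts or projects, it can help to verify which account is active and set a default project before continuing. This is an optional sketch; replace the placeholder with your own project ID:

```shell
# Show the accounts known to the SDK and which one is active
gcloud auth list

# Set the default project for subsequent gcloud/gsutil commands
gcloud config set project <compute_engine_project_name>
```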
At the moment, there is no bdutil release yet which includes the Flink extension. However, you can get the latest version of bdutil with Flink support from GitHub:
git clone https://github.com/GoogleCloudPlatform/bdutil.git
After you have downloaded the source, change into the newly created bdutil directory and continue with the next steps.
If you have not done so, create a bucket for the bdutil config and staging files. A new bucket can be created with gsutil:
gsutil mb gs://<bucket_name>
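To confirm the bucket was created successfully, you can list it with gsutil (an empty listing simply means the bucket exists but holds no objects yet):

```shell
# List the contents of the new bucket; the command fails if the bucket does not exist
gsutil ls gs://<bucket_name>
```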
To deploy Flink with bdutil, adapt at least the following variables in bdutil_env.sh.
CONFIGBUCKET="<bucket_name>"
PROJECT="<compute_engine_project_name>"
NUM_WORKERS=<number_of_workers>

# set this to 'n1-standard-2' if you're using the free trial
GCE_MACHINE_TYPE="<gce_machine_type>"

# for example: "europe-west1-d"
GCE_ZONE="<gce_zone>"
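As an illustration only, a filled-in configuration might look like the following. The bucket name and project ID below are made-up placeholders; substitute your own values:

```shell
CONFIGBUCKET="my-flink-staging-bucket"   # the bucket created with gsutil mb
PROJECT="my-flink-project"               # your Compute Engine project ID
NUM_WORKERS=4
GCE_MACHINE_TYPE="n1-standard-2"         # 'n1-standard-2' fits the free trial
GCE_ZONE="europe-west1-d"
```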
bdutil's Flink extension handles the configuration for you. You may additionally adjust configuration variables in
extensions/flink/flink_env.sh. If you want to adjust the configuration further, please take a look at configuring Flink. You will have to restart Flink after changing its configuration.
To bring up the Flink cluster on Google Compute Engine, execute:
./bdutil -e extensions/flink/flink_env.sh deploy
Once the cluster is up, you can connect to it and run the bundled WordCount example:

./bdutil shell
cd /home/hadoop/flink-install/bin
./flink run ../examples/batch/WordCount.jar gs://dataflow-samples/shakespeare/othello.txt gs://<bucket_name>/output
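After the job finishes, the results can be inspected directly in the bucket from your local machine. The wildcard below assumes the job wrote one or more output files under the given prefix:

```shell
# List the output written by the WordCount job
gsutil ls gs://<bucket_name>/output*

# Print the computed word counts
gsutil cat gs://<bucket_name>/output*
```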
Shutting down a cluster is as simple as executing
./bdutil -e extensions/flink/flink_env.sh delete