YARN

Introduction #

Apache Hadoop YARN is a resource provider popular with many data processing frameworks. Flink services are submitted to YARN’s ResourceManager, which spawns containers on machines managed by YARN NodeManagers. Flink deploys its JobManager and TaskManager instances into such containers.

Flink can dynamically allocate and de-allocate TaskManager resources depending on the number of processing slots required by the job(s) running on the JobManager.
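For example, with two slots per TaskManager, a job with parallelism 4 makes the JobManager request two TaskManager containers from YARN, which are released again once they are idle. A minimal sketch using the session-mode commands explained below (the application ID and example job are placeholders):

# Start a session in which each TaskManager offers 2 slots
./bin/yarn-session.sh --detached -Dtaskmanager.numberOfTaskSlots=2

# A job with parallelism 4 then needs 2 TaskManagers, which the
# JobManager requests from YARN on demand
./bin/flink run -t yarn-session \
  -Dyarn.application.id=application_XXXXX_XXX \
  -p 4 ./examples/streaming/TopSpeedWindowing.jar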

Preparation #

This Getting Started section assumes a functional YARN environment (version 2.10.2 or later). YARN environments are most conveniently provided through services such as Amazon EMR or Google Cloud DataProc, or through products like Cloudera. Manually setting up a YARN environment, whether locally or on a cluster, is not recommended for following this tutorial.

  • Make sure your YARN cluster is ready to accept Flink applications by running yarn top. It should show no error messages.
  • Download a recent Flink distribution from the download page and unpack it.
  • Important: Make sure that the HADOOP_CLASSPATH environment variable is set (check it by running echo $HADOOP_CLASSPATH). If it is not, set it using the command below; a sketch for persisting it across shells follows this list.
export HADOOP_CLASSPATH=`hadoop classpath`
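To persist this setting across shells, you can append it to your shell profile; a minimal sketch assuming bash:

# Evaluate `hadoop classpath` at shell startup and export the result
# (assumes bash; adjust the profile file to your shell)
echo 'export HADOOP_CLASSPATH=$(hadoop classpath)' >> ~/.bashrc
source ~/.bashrc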

Session Mode #

Flink runs on all UNIX-like environments, i.e. Linux, Mac OS X, and Cygwin (for Windows).
Refer to the overview to check supported versions, then download the binary release of Flink and extract the archive:

tar -xzf flink-*.tgz

Set the FLINK_HOME environment variable, for example:

export FLINK_HOME=/path/flink-*
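For example, assuming the distribution was unpacked under /opt (the path and version below are placeholders):

# Point FLINK_HOME at the unpacked distribution and, optionally,
# put the Flink CLI on the PATH (values are illustrative)
export FLINK_HOME=/opt/flink-1.18.0
export PATH=$FLINK_HOME/bin:$PATH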

Once you’ve made sure that the HADOOP_CLASSPATH environment variable is set, you can launch a Flink on YARN session:

# We assume we are in the root directory of
# the unpacked Flink distribution

# export HADOOP_CLASSPATH
export HADOOP_CLASSPATH=`hadoop classpath`

# Start YARN session
./bin/yarn-session.sh --detached

# Stop YARN session (replace the application id based 
# on the output of the yarn-session.sh command)
echo "stop" | ./bin/yarn-session.sh -id application_XXXXX_XXX

After starting the YARN session, you can access the Flink Web UI through the URL printed in the last lines of the command output, or through the YARN ResourceManager web UI.

Next, add the following configuration to your flink-conf.yaml:

rest.bind-port: {{REST_PORT}}
rest.address: {{NODE_IP}}
execution.target: yarn-session
yarn.application.id: {{YARN_APPLICATION_ID}}

{{REST_PORT}} and {{NODE_IP}} should be replaced with the actual port and IP address of your JobManager web interface, and {{YARN_APPLICATION_ID}} with the actual YARN application ID of the Flink session.
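For example, if yarn-session.sh printed a JobManager web interface at http://192.168.1.10:38761 for application application_1695XXXXXXXXX_0001 (all values here are illustrative), the configuration would be:

rest.bind-port: 38761
rest.address: 192.168.1.10
execution.target: yarn-session
yarn.application.id: application_1695XXXXXXXXX_0001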

Download the Flink CDC tar file from the release page, then extract the archive:

tar -xzf flink-cdc-*.tar.gz

The extracted flink-cdc directory contains four subdirectories: bin, lib, log, and conf.

Download the connector JARs from the release page and move them to the lib directory.
Download links are available only for stable releases; SNAPSHOT dependencies need to be built from the corresponding branch yourself.
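For the MySQL-to-Doris pipeline below, this means placing the MySQL and Doris pipeline connector JARs into lib; a sketch with placeholder file names:

# Move the downloaded pipeline connector JARs into lib
# (file names are illustrative; match them to your release)
mv flink-cdc-pipeline-connector-mysql-*.jar lib/
mv flink-cdc-pipeline-connector-doris-*.jar lib/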

Here is an example file for synchronizing the entire database mysql-to-doris.yaml:

################################################################################
# Description: Sync MySQL all tables to Doris
################################################################################
source:
  type: mysql
  hostname: localhost
  port: 3306
  username: root
  password: 123456
  tables: app_db.\.*
  server-id: 5400-5404
  server-time-zone: UTC

sink:
  type: doris
  fenodes: 127.0.0.1:8030
  username: root
  password: ""

pipeline:
  name: Sync MySQL Database to Doris
  parallelism: 2
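The tables option accepts a regular expression, so app_db.\.* matches every table in the app_db database. To capture only specific tables, you could instead list them explicitly; an illustrative snippet (table names are assumptions):

# In the source section, replace the tables line with an explicit,
# comma-separated list (illustrative table names)
tables: app_db.orders,app_db.customers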

Modify the configuration file according to your needs. Finally, submit the job to the Flink YARN session cluster using the CLI:

cd /path/flink-cdc-*
./bin/flink-cdc.sh mysql-to-doris.yaml

After a successful submission, the output is as follows:

Pipeline has been submitted to cluster.
Job ID: ae30f4580f1918bebf16752d4963dc54
Job Description: Sync MySQL Database to Doris

You can find a job named Sync MySQL Database to Doris running in the Flink Web UI.
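You can also verify this from the command line; a sketch using the Flink CLI against the session cluster (replace the application ID with yours):

# List the jobs running on the YARN session cluster
./bin/flink list -t yarn-session -Dyarn.application.id=application_XXXXX_XXX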

Please note that submitting to application-mode and per-job-mode clusters is not supported for now.