Flink provides a Command-Line Interface (CLI) to run programs that are packaged
as JAR files and to control their execution. The CLI is part
of any Flink setup and is available in local single-node setups as well as in
distributed setups. It is located under <flink-home>/bin/flink
and connects by default to the running Flink master (JobManager) that was
started from the same installation directory.
The command line can be used to:
- submit jobs for execution,
- cancel a running job,
- provide information about a job,
- list running and waiting jobs, and
- trigger and dispose savepoints.
A prerequisite to using the command line interface is that the Flink
master (JobManager) has been started (via
<flink-home>/bin/start-cluster.sh) or that another deployment target such as YARN or Kubernetes is
available.
Deployment targets
Flink has the concept of executors for defining available deployment targets. You can see the
available executors in the output of bin/flink --help, for example:
Options for executor mode:
     -D <property=value>   Generic configuration options for
                           execution/deployment and for the configured executor.
                           The available options can be found at
                           https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html
     -e,--executor <arg>   The name of the executor to be used for executing the
                           given job, which is equivalent to the
                           "execution.target" config option. The currently
                           available executors are: "remote", "local",
                           "kubernetes-session", "yarn-per-job", "yarn-session".
When running one of the bin/flink actions, the executor is specified using the --executor
option.
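For example, a packaged job could be submitted to a YARN per-job cluster like this (the JAR path is illustrative; any packaged job works):
./bin/flink run --executor yarn-per-job ./examples/streaming/TopSpeedWindowing.jar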
Note When submitting a Python job via flink run, Flink will run the command “python”. Please run the following command to confirm that the “python” command in the current environment points to a supported Python version (3.5, 3.6 or 3.7):
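python --version
# the version printed here must be 3.5, 3.6 or 3.7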
Run Python Table program:
./bin/flink run -py examples/python/table/batch/word_count.py
Run Python Table program with pyFiles:
./bin/flink run -py examples/python/table/batch/word_count.py \
-pyfs file:///user.txt,hdfs:///$namenode_address/username.txt
Run Python Table program with a JAR file:
./bin/flink run -py examples/python/table/batch/word_count.py -j <jarFile>
Run Python Table program with pyFiles and pyModule:
./bin/flink run -pym batch.word_count -pyfs examples/python/table/batch
Run Python Table program with parallelism 16:
./bin/flink run -p 16 -py examples/python/table/batch/word_count.py
Run Python Table program with flink log output disabled:
./bin/flink run -q -py examples/python/table/batch/word_count.py
Run Python Table program in detached mode:
./bin/flink run -d -py examples/python/table/batch/word_count.py
Run Python Table program on a specific JobManager:
./bin/flink run -m myJMHost:8081 \
-py examples/python/table/batch/word_count.py
Savepoints
Savepoints are controlled via the command line client:
Trigger a Savepoint
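A savepoint is triggered with the savepoint action, for example (jobId and the optional savepointDirectory are placeholders):
./bin/flink savepoint <jobId> [savepointDirectory]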
This will trigger a savepoint for the job with ID jobId, and return the path of the created savepoint. You need this path to restore and dispose savepoints.
Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.
If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.
Trigger a Savepoint with YARN
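On YARN, the application ID is additionally passed with -yid (jobId and yarnAppId are placeholders):
./bin/flink savepoint <jobId> [savepointDirectory] -yid <yarnAppId>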
This will trigger a savepoint for the job with ID jobId and YARN application ID yarnAppId, and return the path of the created savepoint.
Everything else is the same as described in the above Trigger a Savepoint section.
Stop
Use stop to gracefully stop a running streaming job with a savepoint.
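A minimal invocation looks like this (jobID is a placeholder; -p optionally sets the savepoint target directory, and -d triggers the drain behavior described below):
./bin/flink stop [-p targetDirectory] [-d] <jobID>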
A “stop” call is a more graceful way of stopping a running streaming job, as the “stop” signal flows from
source to sink. When the user requests to stop a job, all sources will be requested to send the last checkpoint barrier
that will trigger a savepoint, and after the successful completion of that savepoint, they will finish by calling their
cancel() method. If the -d flag is specified, then a MAX_WATERMARK will be emitted before the last checkpoint
barrier. This will cause all registered event-time timers to fire, thus flushing out any state that is waiting for
a specific watermark, e.g. windows. The job will keep running until all sources properly shut down. This allows the
job to finish processing all in-flight data.
Cancel with a savepoint (deprecated)
You can atomically trigger a savepoint and cancel a job.
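For example (jobID is a placeholder; targetDirectory can be omitted if a default savepoint directory is configured):
./bin/flink cancel -s [targetDirectory] <jobID>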
If no savepoint directory is configured, you need to configure a default savepoint directory for the Flink installation (see Savepoints).
The job will only be cancelled if the savepoint succeeds.
Note: Cancelling a job with savepoint is deprecated. Use "stop" instead.
Restore a savepoint
The run command has a savepoint flag for submitting a job that restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.
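For example (savepointPath is the path returned by the savepoint trigger command; the remaining run arguments stay the same):
./bin/flink run -s <savepointPath> ...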
By default, we try to match all savepoint state to the job being submitted. If you want to allow skipping savepoint state that cannot be restored with the new job, you can set the allowNonRestoredState flag. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered and you still want to use the savepoint.
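With the flag set (shorthand -n), the submission looks like this:
./bin/flink run -s <savepointPath> -n ...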
This is useful if your program dropped an operator that was part of the savepoint.
Dispose a savepoint
Disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.
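For example (savepointPath is a placeholder):
./bin/flink savepoint -d <savepointPath>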
If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:
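For example (both paths are placeholders):
./bin/flink savepoint -d <savepointPath> -j <jarFile>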
Otherwise, you will run into a ClassNotFoundException.