Flink provides a Command-Line Interface (CLI) to run programs that are packaged
as JAR files and to control their execution. The CLI is part
of any Flink setup and is available in local single-node setups as well as in
distributed setups. It is located at <flink-home>/bin/flink
and connects by default to the running Flink master (JobManager) that was
started from the same installation directory.
A prerequisite for using the CLI is that the Flink
master (JobManager) has been started (via
<flink-home>/bin/start-cluster.sh) or that a YARN environment is
available.
Display the optimized execution plan for the WordCount example program as JSON:
./bin/flink info ./examples/batch/WordCount.jar \
--input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
List scheduled and running jobs (including their JobIDs):
./bin/flink list
List scheduled jobs (including their JobIDs):
./bin/flink list -s
List running jobs (including their JobIDs):
./bin/flink list -r
List all existing jobs (including their JobIDs):
./bin/flink list -a
List running Flink jobs inside Flink YARN session:
./bin/flink list -m yarn-cluster -yid <yarnApplicationID> -r
Cancel a job:
./bin/flink cancel <jobID>
Cancel a job with a savepoint:
./bin/flink cancel -s [targetDirectory] <jobID>
Stop a job (streaming jobs only):
./bin/flink stop <jobID>
Modify a running job (streaming jobs only):
./bin/flink modify <jobID> -p <newParallelism>
NOTE: The difference between cancelling and stopping a (streaming) job is the following:
On a cancel call, the operators in a job immediately receive a cancel() method call to cancel them as
soon as possible.
If operators do not stop after the cancel call, Flink will periodically interrupt their threads
until they stop.
A “stop” call is a more graceful way of stopping a running streaming job. Stop is only available for jobs
which use sources that implement the StoppableFunction interface. When the user requests to stop a job,
all sources will receive a stop() method call. The job will keep running until all sources properly shut down.
This allows the job to finish processing all inflight data.
Savepoints
Savepoints are controlled via the command line client:
Trigger a Savepoint
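Trigger a savepoint for a running job, optionally writing it to a specific directory:
./bin/flink savepoint <jobID> [savepointDirectory]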
This will trigger a savepoint for the job with the given jobID and return the path of the created savepoint. You need this path to restore and dispose of savepoints.
Furthermore, you can optionally specify a target file system directory to store the savepoint in. The directory needs to be accessible by the JobManager.
If you don’t specify a target directory, you need to have configured a default directory. Otherwise, triggering the savepoint will fail.
Trigger a Savepoint with YARN
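Trigger a savepoint for a job running in a YARN session by additionally passing the YARN application ID:
./bin/flink savepoint <jobID> [savepointDirectory] -yid <yarnApplicationID>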
This will trigger a savepoint for the job with the given jobID and YARN application ID and return the path of the created savepoint.
Everything else is the same as described in the above Trigger a Savepoint section.
Cancel with a savepoint
You can atomically trigger a savepoint and cancel a job.
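This uses the same cancel command shown above:
./bin/flink cancel -s [targetDirectory] <jobID>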
If no savepoint directory is specified with the command, a default savepoint directory must be configured for the Flink installation (see Savepoints).
The job will only be cancelled if the savepoint succeeds.
Restore a savepoint
The run command has a savepoint flag to submit a job, which restores its state from a savepoint. The savepoint path is returned by the savepoint trigger command.
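The flag is -s (long form --fromSavepoint); <jarFile> below is a placeholder for the JAR of the program being submitted:
./bin/flink run -s <savepointPath> <jarFile>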
By default, we try to match all savepoint state to the job being submitted. If you want to allow skipping savepoint state that cannot be restored with the new job, you can set the allowNonRestoredState flag. You need to allow this if you removed an operator from your program that was part of the program when the savepoint was triggered and you still want to use the savepoint.
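On the command line, this corresponds to the -n (long form --allowNonRestoredState) flag:
./bin/flink run -s <savepointPath> -n <jarFile>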
Dispose a savepoint
Disposes the savepoint at the given path. The savepoint path is returned by the savepoint trigger command.
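Dispose a savepoint by passing its path to the -d (--dispose) option of the savepoint action:
./bin/flink savepoint -d <savepointPath>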
If you use custom state instances (for example custom reducing state or RocksDB state), you have to specify the path to the program JAR with which the savepoint was triggered in order to dispose the savepoint with the user code class loader:
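./bin/flink savepoint -d <savepointPath> -j <jarFile>
Here -j (long form --jarfile) points at the program JAR; <jarFile> is a placeholder for its path.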
Otherwise, you will run into a ClassNotFoundException.