Flink’s Table & SQL API makes it possible to work with queries written in the SQL language, but these queries need to be embedded within a table program that is written in either Java or Scala. Moreover, these programs need to be packaged with a build tool before being submitted to a cluster. This more or less limits the usage of Flink to Java/Scala programmers.
The SQL Client aims to provide an easy way of writing, debugging, and submitting table programs to a Flink cluster without a single line of Java or Scala code. The SQL Client CLI allows for retrieving and visualizing real-time results from the running distributed application on the command line.
Attention The SQL Client is in an early development phase. Even though the application is not production-ready yet, it can be a quite useful tool for prototyping and playing around with Flink SQL. In the future, the community plans to extend its functionality by providing a REST-based SQL Client Gateway.
This section describes how to set up and run your first Flink SQL program from the command line.
The SQL Client is bundled in the regular Flink distribution and thus runnable out-of-the-box. It requires only a running Flink cluster where table programs can be executed. For more information about setting up a Flink cluster see the Cluster & Deployment part. If you simply want to try out the SQL Client, you can also start a local cluster with one worker using the following command:
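For example, from the root directory of the Flink distribution, the standard startup script launches such a local cluster:

```bash
./bin/start-cluster.sh
```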
The SQL Client scripts are also located in the binary directory of Flink. In the future, a user will have two possibilities of starting the SQL Client CLI: either by starting an embedded standalone process or by connecting to a remote SQL Client Gateway. At the moment, only the embedded mode is supported. You can start the CLI by calling:
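```bash
./bin/sql-client.sh embedded
```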
By default, the SQL Client will read its configuration from the environment file located in `./conf/sql-client-defaults.yaml`. See the configuration part for more information about the structure of environment files.
Once the CLI has been started, you can use the `HELP` command to list all available SQL statements. For validating your setup and cluster connection, you can enter your first SQL query and press the `Enter` key to execute it:
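For instance, the following simple query requires no table source and can serve as a smoke test:

```sql
SELECT 'Hello World';
```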
This query requires no table source and produces a single row result. The CLI will retrieve results from the cluster and visualize them. You can close the result view by pressing the `Q` key.
The CLI supports two modes for maintaining and visualizing results.
The table mode materializes results in memory and visualizes them in a regular, paginated table representation. It can be enabled by executing the following command in the CLI:
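For example, using the `SET` statement (the property name follows the `execution` section of the environment file):

```sql
SET execution.result-mode=table;
```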
The changelog mode does not materialize results and visualizes the result stream that is produced by a continuous query consisting of insertions (`+`) and retractions (`-`).
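It can be enabled analogously, for example:

```sql
SET execution.result-mode=changelog;
```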
You can use the following query to see both result modes in action:
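A self-contained example that needs no external table source is a word count over an inline `VALUES` table, for instance:

```sql
SELECT name, COUNT(*) AS cnt
FROM (VALUES ('Bob'), ('Alice'), ('Greg'), ('Bob')) AS NameTable(name)
GROUP BY name;
```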
This query performs a bounded word count example.
In changelog mode, the visualized changelog should be similar to:
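For the word count query above, the stream of changes would look roughly like this (each retraction is followed by the updated row):

```text
+ Bob, 1
+ Alice, 1
+ Greg, 1
- Bob, 1
+ Bob, 2
```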
In table mode, the visualized result table is continuously updated until the table program ends with:
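For the same query, the final materialized table would look roughly like this:

```text
Bob, 2
Alice, 1
Greg, 1
```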
Both result modes can be useful during the prototyping of SQL queries. In both modes, results are stored in the Java heap memory of the SQL Client. In order to keep the CLI interface responsive, the changelog mode only shows the latest 1000 changes. The table mode allows for navigating through bigger results that are only limited by the available main memory and the configured maximum number of rows (`max-table-result-rows`).
Attention Queries that are executed in a batch environment can only be retrieved using the `table` result mode.
After a query is defined, it can be submitted to the cluster as a long-running, detached Flink job. For this, a target system that stores the results needs to be specified using the INSERT INTO statement. The configuration section explains how to declare table sources for reading data, how to declare table sinks for writing data, and how to configure other table program properties.
The SQL Client can be started with the following optional CLI commands. They are discussed in detail in the subsequent paragraphs.
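The exact set of options is printed by the `--help` option; a typical invocation combining the options discussed below might look like this (paths are placeholders):

```bash
./bin/sql-client.sh embedded \
  --defaults conf/sql-client-defaults.yaml \
  --environment conf/my-session.yaml \
  --jar /path/to/my-udfs.jar
```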
A SQL query needs a configuration environment in which it is executed. The so-called environment files define available table sources and sinks, external catalogs, user-defined functions, and other properties required for execution and deployment.
Every environment file is a regular YAML file. An example of such a file is presented below.
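The sketch below illustrates the overall structure; exact property keys can vary between Flink versions, so treat it as an illustration rather than a copy-paste template. It declares the `MyTableSource`, `MyCustomView`, and `myUDF` entries referenced next:

```yaml
# Tables such as sources, sinks, views, or temporal tables.
tables:
  - name: MyTableSource
    type: source-table
    update-mode: append
    connector:
      type: filesystem
      path: "/path/to/something.csv"
    format:
      type: csv
      fields:
        - name: MyField1
          type: INT
        - name: MyField2
          type: VARCHAR
    schema:
      - name: MyField1
        type: INT
      - name: MyField2
        type: VARCHAR
  - name: MyCustomView
    type: view
    query: "SELECT MyField2 FROM MyTableSource"

# User-defined functions.
functions:
  - name: myUDF
    from: class
    class: foo.bar.AggregateUDF
    constructor:
      - 7.6
      - false

# Execution properties for table programs.
execution:
  type: streaming        # 'batch' or 'streaming'
  result-mode: table     # 'table' or 'changelog'
  parallelism: 1

# Deployment properties for the cluster that programs are submitted to.
deployment:
  response-timeout: 5000
```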
This configuration:

- defines a table source `MyTableSource` that reads from a CSV file,
- defines a view `MyCustomView` that declares a virtual table using a SQL query,
- defines a user-defined function `myUDF` that can be instantiated using the class name and two constructor parameters,
- and runs queries in the `table` result mode.

Depending on the use case, a configuration can be split into multiple files. Therefore, environment files can be created for general purposes (defaults environment file using `--defaults`) as well as on a per-session basis (session environment file using `--environment`). Every CLI session is initialized with the default properties followed by the session properties. For example, the defaults environment file could specify all table sources that should be available for querying in every session whereas the session environment file only declares a specific state retention time and parallelism. Both default and session environment files can be passed when starting the CLI application. If no default environment file has been specified, the SQL Client searches for `./conf/sql-client-defaults.yaml` in Flink's configuration directory.
Attention Properties that have been set within a CLI session (e.g. using the `SET` command) have highest precedence:
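```text
CLI commands > session environment file > defaults environment file
```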
Restart strategies control how Flink jobs are restarted in case of a failure. Similar to global restart strategies for a Flink cluster, a more fine-grained restart configuration can be declared in an environment file.
The following strategies are supported:
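- `fallback`: use the global restart strategy defined in the cluster configuration (default),
- `none`: no restart on failure,
- `fixed-delay`: a fixed number of restart attempts with a fixed delay between them,
- `failure-rate`: restarts as long as a configurable failure rate is not exceeded.

A strategy can be declared in the `execution` section of an environment file; the snippet below is a sketch with property names assumed to follow the configuration style shown above:

```yaml
execution:
  restart-strategy:
    type: fixed-delay  # one of: fallback, none, fixed-delay, failure-rate
    attempts: 3        # number of restart attempts before the query fails
    delay: 10000       # delay between attempts, in milliseconds
```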
The SQL Client does not require setting up a Java project using Maven or SBT. Instead, you can pass the dependencies as regular JAR files that get submitted to the cluster. You can either specify each JAR file separately (using `--jar`) or define entire library directories (using `--library`). For connectors to external systems (such as Apache Kafka) and corresponding data formats (such as JSON), Flink provides ready-to-use JAR bundles. These JAR files can be downloaded for each release from the Maven central repository.
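For example (the JAR names and paths are placeholders):

```bash
./bin/sql-client.sh embedded \
  --jar /path/to/flink-sql-connector-kafka.jar \
  --library /path/to/sql-jars/
```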
The full list of offered SQL JARs and documentation about how to use them can be found on the connection to external systems page.
The following example shows an environment file that defines a table source reading JSON data from Apache Kafka.
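The sketch below illustrates such a declaration; the connector and format property keys can differ between Flink and Kafka connector versions, so check the connector documentation for your release:

```yaml
tables:
  - name: TaxiRide
    type: source-table
    update-mode: append
    connector:
      property-version: 1
      type: kafka
      version: "0.11"
      topic: TaxiRides
      startup-mode: earliest-offset
      properties:
        - key: bootstrap.servers
          value: localhost:9092
        - key: group.id
          value: testGroup
    format:
      property-version: 1
      type: json
      schema: "ROW<rideId LONG, lon FLOAT, lat FLOAT, rideTime TIMESTAMP>"
    schema:
      - name: rideId
        type: LONG
      - name: lon
        type: FLOAT
      - name: lat
        type: FLOAT
      - name: rowTime
        type: TIMESTAMP
        rowtime:
          timestamps:
            type: "from-field"
            from: "rideTime"
          watermarks:
            type: "periodic-bounded"
            delay: "60000"
      - name: procTime
        type: TIMESTAMP
        proctime: true
```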
The resulting schema of the `TaxiRide` table contains most of the fields of the JSON schema. Furthermore, it adds a rowtime attribute `rowTime` and a processing-time attribute `procTime`.
Both `connector` and `format` allow defining a property version (which is currently version `1`) for future backwards compatibility.
The SQL Client allows users to create custom, user-defined functions to be used in SQL queries. Currently, these functions are restricted to be defined programmatically in Java/Scala classes.
In order to provide a user-defined function, you need to first implement and compile a function class that extends `ScalarFunction`, `AggregateFunction`, or `TableFunction` (see User-defined Functions). One or more functions can then be packaged into a dependency JAR for the SQL Client.
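As an illustration, a scalar function with constructor parameters could look like the following hypothetical class (the package, class name, and parameter semantics are made up for this example):

```java
package foo.bar;

import org.apache.flink.table.functions.ScalarFunction;

// Hypothetical UDF: prefixes every value with a configurable tag.
public class PrefixFunction extends ScalarFunction {

    private final String prefix;
    private final boolean uppercase;

    // Constructor parameters can be supplied from the environment file.
    public PrefixFunction(String prefix, Boolean uppercase) {
        this.prefix = prefix;
        this.uppercase = uppercase;
    }

    public String eval(String value) {
        String result = prefix + value;
        return uppercase ? result.toUpperCase() : result;
    }
}
```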
All functions must be declared in an environment file before being called. For each item in the list of `functions`, one must specify

- a `name` under which the function is registered,
- the source of the function using `from` (restricted to be `class` for now),
- the `class` which indicates the fully qualified class name of the function and an optional list of `constructor` parameters for instantiation.

Make sure that the order and types of the specified parameters strictly match one of the constructors of your function class.
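A declaration could look roughly like this, using the hypothetical `PrefixFunction` from the sketch above:

```yaml
functions:
  - name: prefixUDF                # name under which the function is registered
    from: class
    class: foo.bar.PrefixFunction  # fully qualified class name
    constructor:                   # optional constructor parameters
      - "user_"
      - false
```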
Depending on the user-defined function, it might be necessary to parameterize the implementation before using it in SQL statements.
As shown in the example before, when declaring a user-defined function, a class can be configured using constructor parameters in one of the following three ways:
A literal value with implicit type: The SQL Client will automatically derive the type according to the literal value itself. Currently, only values of `BOOLEAN`, `INT`, `DOUBLE`, and `VARCHAR` are supported here. If the automatic derivation does not work as expected (e.g., you need a VARCHAR `false`), use explicit types instead.
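For example, constructor parameters given as plain YAML literals could be derived as follows (a sketch):

```yaml
constructor:
  - abc        # -> VARCHAR
  - true       # -> BOOLEAN
  - 42         # -> INT
  - 3.14       # -> DOUBLE
```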
A literal value with explicit type: Explicitly declare the parameter with `type` and `value` properties for type-safety.
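For example, a parameter that should be passed as the string `"false"` rather than a boolean could be declared like this (sketch):

```yaml
constructor:
  - type: VARCHAR
    value: "false"
```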
The table below illustrates the supported Java parameter types and the corresponding SQL type strings.
Java type | SQL type |
---|---|
`java.math.BigDecimal` | `DECIMAL` |
`java.lang.Boolean` | `BOOLEAN` |
`java.lang.Byte` | `TINYINT` |
`java.lang.Double` | `DOUBLE` |
`java.lang.Float` | `REAL`, `FLOAT` |
`java.lang.Integer` | `INTEGER`, `INT` |
`java.lang.Long` | `BIGINT` |
`java.lang.Short` | `SMALLINT` |
`java.lang.String` | `VARCHAR` |
More types (e.g., `TIMESTAMP` or `ARRAY`), primitive types, and `null` are not supported yet.
A (nested) class instance: Besides literal values, you can also create (nested) class instances for constructor parameters by specifying the `class` and `constructor` properties.
This process can be recursively performed until all the constructor parameters are represented with literal values.
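A sketch with a hypothetical parameter class, where the nested instance itself takes one implicitly typed and one explicitly typed literal:

```yaml
constructor:
  - class: foo.bar.ParamClass   # hypothetical class used as a constructor argument
    constructor:
      - StarryName              # implicit VARCHAR
      - type: VARCHAR           # explicit type for a value that would otherwise be derived as INT
        value: "3"
```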
Catalogs can be defined as a set of YAML properties and are automatically registered to the environment upon starting the SQL Client.
Users can specify which catalog they want to use as the current catalog in SQL CLI, and which database of the catalog they want to use as the current database.
For more information about catalogs, see Catalogs.
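A sketch of a catalog definition together with the current catalog and database selection; the `hive` catalog type and its properties are only an example, and the available types depend on the catalog implementations on the classpath:

```yaml
catalogs:
  - name: myhive
    type: hive
    hive-conf-dir: /opt/hive-conf
    default-database: mydb

execution:
  current-catalog: myhive    # catalog to use by default in the session
  current-database: mydb     # database to use by default in the session
```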
In order to define end-to-end SQL pipelines, SQL's `INSERT INTO` statement can be used for submitting long-running, detached queries to a Flink cluster. These queries produce their results into an external system instead of the SQL Client. This allows for dealing with higher parallelism and larger amounts of data. The CLI itself does not have any control over a detached query after submission.
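For example, assuming a sink table such as the `MyTableSink` declared in the next example, a detached query could be submitted like this:

```sql
INSERT INTO MyTableSink SELECT * FROM MyTableSource;
```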
The table sink `MyTableSink` has to be declared in the environment file. See the connection page for more information about supported external systems and their configuration. An example for an Apache Kafka table sink is shown below.
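A sketch of such a sink declaration; as with the source example above, the property keys depend on the connector version:

```yaml
tables:
  - name: MyTableSink
    type: sink-table
    update-mode: append
    connector:
      property-version: 1
      type: kafka
      version: "0.11"
      topic: OutputTopic
      properties:
        - key: bootstrap.servers
          value: localhost:9092
    format:
      property-version: 1
      type: json
      derive-schema: true
    schema:
      - name: rideId
        type: LONG
      - name: lon
        type: FLOAT
      - name: lat
        type: FLOAT
      - name: rideTime
        type: TIMESTAMP
```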
The SQL Client makes sure that a statement is successfully submitted to the cluster. Once the query is submitted, the CLI will show information about the Flink job.
Attention The SQL Client does not track the status of the running Flink job after submission. The CLI process can be shut down after the submission without affecting the detached query. Flink's restart strategy takes care of the fault-tolerance. A query can be cancelled using Flink's web interface, command-line, or REST API.
Views allow for defining virtual tables from SQL queries. The view definition is parsed and validated immediately. However, the actual execution happens when the view is accessed during the submission of a general `INSERT INTO` or `SELECT` statement.
Views can either be defined in environment files or within the CLI session.
The following example shows how to define multiple views in a file. The views are registered in the order in which they are defined in the environment file. Reference chains such as view A depends on view B depends on view C are supported.
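A sketch of two views defined on top of the `MyTableSource` table from the earlier example:

```yaml
tables:
  - name: MyTableSource
    type: source-table
    # ... connector, format, and schema as shown earlier ...
  - name: MyRestrictedView
    type: view
    query: "SELECT MyField2 FROM MyTableSource"
  - name: MyComplexView
    type: view
    query: >
      SELECT MyField2 + 42, CAST(MyField1 AS VARCHAR)
      FROM MyTableSource
      WHERE MyField2 > 200
```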
Similar to table sources and sinks, views defined in a session environment file have highest precedence.
Views can also be created within a CLI session using the `CREATE VIEW` statement:
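```sql
CREATE VIEW MyNewView AS SELECT MyField2 FROM MyTableSource;
```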
Views created within a CLI session can also be removed again using the `DROP VIEW` statement:
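```sql
DROP VIEW MyNewView;
```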
Attention The definition of views in the CLI is limited to the syntax mentioned above. Defining a schema for views or escaping whitespaces in table names will be supported in future versions.
A temporal table allows for a (parameterized) view on a changing history table that returns the content of a table at a specific point in time. This is especially useful for joining a table with the content of another table at a particular timestamp. More information can be found in the temporal table joins page.
The following example shows how to define a temporal table `SourceTemporalTable`:
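A sketch of such a definition; the `HistorySource` table and the field names are placeholders, and the property keys follow the environment-file style used above:

```yaml
tables:
  - name: HistorySource
    type: source-table
    # ... connector, format, and schema of the changing history table ...
  - name: SourceTemporalTable
    type: temporal-table
    history-table: HistorySource   # table or view that tracks the history
    primary-key: integerField      # key by which versions are correlated
    time-attribute: rowtimeField   # time attribute used for versioning
```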
As shown in the example, definitions of table sources, views, and temporal tables can be mixed with each other. They are registered in the order in which they are defined in the environment file. For example, a temporal table can reference a view which can depend on another view or table source.
The current SQL Client implementation is in a very early development stage and might change in the future as part of the bigger Flink Improvement Proposal 24 (FLIP-24). Feel free to join the discussion and open issues about bugs and features that you find useful.