@PublicEvolving
public interface Table extends Explainable<Table>, Executable

A Table object is the core abstraction of the Table API. Similar to how the DataStream API is built around DataStreams, the Table API is built around Tables.

A Table object describes a pipeline of data transformations. It does not contain the data itself in any way. Instead, it describes how to read data from a DynamicTableSource and how to eventually write data to a DynamicTableSink. The declared pipeline can be printed, optimized, and eventually executed in a cluster. The pipeline can work with bounded or unbounded streams, which enables both streaming and batch scenarios.

By the definition above, a Table object can be considered a view in SQL terms.
The initial Table object is constructed by a TableEnvironment. For example, TableEnvironment.from(String) obtains a table from a catalog. Every Table object has a schema that is available through getResolvedSchema(). A Table object is always associated with its original table environment during programming.

Every transformation (e.g. select(Expression...) or filter(Expression)) on a Table object leads to a new Table object.

Use Executable.execute() to execute the pipeline and retrieve the transformed data locally during development. Otherwise, use executeInsert(String) to write the data into a table sink.
Many methods of this class take one or more Expressions as parameters. For fluent definition of expressions and easier readability, we recommend adding a star import:

import static org.apache.flink.table.api.Expressions.*;

Check the documentation for more programming-language-specific APIs, for example, for using Scala implicits.

The following example shows how to work with a Table object.
Java Example (with static import for expressions):
TableEnvironment tableEnv = TableEnvironment.create(...);
Table table = tableEnv.from("MyTable").select($("colA").trim(), $("colB").plus(12));
table.execute().print();
Scala Example (with implicits for expressions):
val tableEnv = TableEnvironment.create(...)
val table = tableEnv.from("MyTable").select($"colA".trim(), $"colB" + 12)
table.execute().print()
Modifier and Type | Method and Description
---|---
Table | addColumns(Expression... fields) - Adds additional columns.
Table | addOrReplaceColumns(Expression... fields) - Adds additional columns.
AggregatedTable | aggregate(Expression aggregateFunction) - Performs a global aggregate operation with an aggregate function.
Table | as(Expression... fields) - Deprecated.
Table | as(String field, String... fields) - Renames the fields of the expression result.
TemporalTableFunction | createTemporalTableFunction(Expression timeAttribute, Expression primaryKey) - Creates a TemporalTableFunction backed by this table as a history table.
Table | distinct() - Removes duplicate values and returns only distinct (different) values.
Table | dropColumns(Expression... fields) - Drops existing columns.
default TableResult | executeInsert(String tablePath) - Shorthand for tableEnv.insertInto(tablePath).execute().
default TableResult | executeInsert(String tablePath, boolean overwrite) - Shorthand for tableEnv.insertInto(tablePath, overwrite).execute().
default TableResult | executeInsert(TableDescriptor descriptor) - Shorthand for tableEnv.insertInto(descriptor).execute().
default TableResult | executeInsert(TableDescriptor descriptor, boolean overwrite) - Shorthand for tableEnv.insertInto(descriptor, overwrite).execute().
Table | fetch(int fetch) - Limits a (possibly sorted) result to the first n rows.
Table | filter(Expression predicate) - Filters out elements that don't pass the filter predicate.
FlatAggregateTable | flatAggregate(Expression tableAggregateFunction) - Performs a global flatAggregate without groupBy.
Table | flatMap(Expression tableFunction) - Performs a flatMap operation with a user-defined or built-in table function.
Table | fullOuterJoin(Table right, Expression joinPredicate) - Joins two Tables.
QueryOperation | getQueryOperation() - Returns the underlying logical representation of this table.
ResolvedSchema | getResolvedSchema() - Returns the resolved schema of this table.
default TableSchema | getSchema() - Deprecated as part of FLIP-164. TableSchema has been replaced by two more dedicated classes, Schema and ResolvedSchema. Use Schema for declaration in APIs; ResolvedSchema is offered by the framework after resolution and validation.
GroupedTable | groupBy(Expression... fields) - Groups the elements on some grouping keys.
TablePipeline | insertInto(String tablePath) - Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) that was registered under the specified path.
TablePipeline | insertInto(String tablePath, boolean overwrite) - Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) that was registered under the specified path.
TablePipeline | insertInto(TableDescriptor descriptor) - Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) expressed via the given TableDescriptor.
TablePipeline | insertInto(TableDescriptor descriptor, boolean overwrite) - Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) expressed via the given TableDescriptor.
Table | intersect(Table right) - Intersects two Tables with duplicate records removed.
Table | intersectAll(Table right) - Intersects two Tables.
Table | join(Table right) - Joins two Tables.
Table | join(Table right, Expression joinPredicate) - Joins two Tables.
Table | joinLateral(Expression tableFunctionCall) - Joins this Table with a user-defined TableFunction.
Table | joinLateral(Expression tableFunctionCall, Expression joinPredicate) - Joins this Table with a user-defined TableFunction.
Table | leftOuterJoin(Table right) - Joins two Tables.
Table | leftOuterJoin(Table right, Expression joinPredicate) - Joins two Tables.
Table | leftOuterJoinLateral(Expression tableFunctionCall) - Joins this Table with a user-defined TableFunction.
Table | leftOuterJoinLateral(Expression tableFunctionCall, Expression joinPredicate) - Joins this Table with a user-defined TableFunction.
default Table | limit(int fetch) - Limits a (possibly sorted) result to the first n rows.
default Table | limit(int offset, int fetch) - Limits a (possibly sorted) result to the first n rows from an offset position.
Table | map(Expression mapFunction) - Performs a map operation with a user-defined or built-in scalar function.
Table | minus(Table right) - Minus of two Tables with duplicate records removed.
Table | minusAll(Table right) - Minus of two Tables.
Table | offset(int offset) - Limits a (possibly sorted) result from an offset position.
Table | orderBy(Expression... fields) - Sorts the given Table.
void | printSchema() - Prints the schema of this table to the console in a summary format.
Table | renameColumns(Expression... fields) - Renames existing columns.
Table | rightOuterJoin(Table right, Expression joinPredicate) - Joins two Tables.
Table | select(Expression... fields) - Performs a selection operation.
Table | union(Table right) - Unions two Tables with duplicate records removed.
Table | unionAll(Table right) - Unions two Tables.
Table | where(Expression predicate) - Filters out elements that don't pass the filter predicate.
GroupWindowedTable | window(GroupWindow groupWindow) - Groups the records of a table by assigning them to windows defined by a time or row interval.
OverWindowedTable | window(OverWindow... overWindows) - Defines over-windows on the records of a table.

Methods inherited from interface Explainable: explain, printExplain
Methods inherited from interface Executable: execute
@Deprecated
default TableSchema getSchema()

Deprecated. This method has been deprecated as part of FLIP-164. TableSchema has been replaced by two more dedicated classes, Schema and ResolvedSchema. Use Schema for declaration in APIs. ResolvedSchema is offered by the framework after resolution and validation.

ResolvedSchema getResolvedSchema()

Returns the resolved schema of this table.

void printSchema()

Prints the schema of this table to the console in a summary format.

QueryOperation getQueryOperation()

Returns the underlying logical representation of this table.
Table select(Expression... fields)

Performs a selection operation.
Java Example:
tab.select($("key"), $("value").avg().plus(" The average").as("average"));
Scala Example:
tab.select($"key", $"value".avg + " The average" as "average")
TemporalTableFunction createTemporalTableFunction(Expression timeAttribute, Expression primaryKey)

Creates a TemporalTableFunction backed by this table as a history table. Temporal tables represent the concept of a table that changes over time, and for which Flink keeps track of those changes. A TemporalTableFunction provides a way to access that data.

For more information, please check Flink's documentation on Temporal Tables.

Currently, TemporalTableFunctions are only supported in streaming.
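As an illustrative sketch (the table name Rates and the columns currency and rowtime are assumptions for this example, not part of the API contract), a temporal table function could be created like this:

```java
// Sketch: assumes a table "Rates" with primary key column "currency"
// and a time attribute column "rowtime".
Table rates = tableEnv.from("Rates");
// The resulting function takes a single time argument and returns the
// version of each row that was valid at that point in time.
TemporalTableFunction ratesHistory =
    rates.createTemporalTableFunction($("rowtime"), $("currency"));
```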
Parameters:
timeAttribute - Must point to a time indicator. Provides a way to compare which records are a newer or older version.
primaryKey - Defines the primary key. With a primary key it is possible to update or delete a row.
Returns:
A TemporalTableFunction, which is an instance of TableFunction. It takes one single argument, the timeAttribute, for which it returns the matching version of the Table from which the TemporalTableFunction was created.

Table as(String field, String... fields)

Renames the fields of the expression result.
Example:
tab.as("a", "b")
@Deprecated
Table as(Expression... fields)

Deprecated. Use as(String, String...) instead.
Java Example:
tab.as($("a"), $("b"))
Scala Example:
tab.as($"a", $"b")
Table filter(Expression predicate)

Filters out elements that don't pass the filter predicate.
Java Example:
tab.filter($("name").isEqual("Fred"));
Scala Example:
tab.filter($"name" === "Fred")
Table where(Expression predicate)

Filters out elements that don't pass the filter predicate.
Java Example:
tab.where($("name").isEqual("Fred"));
Scala Example:
tab.where($"name" === "Fred")
GroupedTable groupBy(Expression... fields)

Groups the elements on some grouping keys.
Java Example:
tab.groupBy($("key")).select($("key"), $("value").avg());
Scala Example:
tab.groupBy($"key").select($"key", $"value".avg)
Table distinct()

Removes duplicate values and returns only distinct (different) values.
Example:
tab.select($("key"), $("value")).distinct();
Table join(Table right)

Joins two Tables. Similar to a SQL join. The fields of the two joined operations must not overlap; use as to rename fields if necessary. You can use where and select clauses after a join to further specify the behaviour of the join.

Note: Both tables must be bound to the same TableEnvironment.
Example:
left.join(right)
    .where($("a").isEqual($("b")).and($("c").isGreater(3)))
.select($("a"), $("b"), $("d"));
Table join(Table right, Expression joinPredicate)

Joins two Tables. Similar to a SQL join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.

Note: Both tables must be bound to the same TableEnvironment.
Java Example:
left.join(right, $("a").isEqual($("b")))
.select($("a"), $("b"), $("d"));
Scala Example:
left.join(right, $"a" === $"b")
.select($"a", $"b", $"d")
Table leftOuterJoin(Table right)

Joins two Tables. Similar to a SQL left outer join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.

Note: Both tables must be bound to the same TableEnvironment and its TableConfig must have null check enabled (default).
Example:
left.leftOuterJoin(right)
.select($("a"), $("b"), $("d"));
Table leftOuterJoin(Table right, Expression joinPredicate)

Joins two Tables. Similar to a SQL left outer join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.

Note: Both tables must be bound to the same TableEnvironment and its TableConfig must have null check enabled (default).
Java Example:
left.leftOuterJoin(right, $("a").isEqual($("b")))
.select($("a"), $("b"), $("d"));
Scala Example:
left.leftOuterJoin(right, $"a" === $"b")
.select($"a", $"b", $"d")
Table rightOuterJoin(Table right, Expression joinPredicate)

Joins two Tables. Similar to a SQL right outer join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.

Note: Both tables must be bound to the same TableEnvironment and its TableConfig must have null check enabled (default).
Java Example:
left.rightOuterJoin(right, $("a").isEqual($("b")))
.select($("a"), $("b"), $("d"));
Scala Example:
left.rightOuterJoin(right, $"a" === $"b")
.select($"a", $"b", $"d")
Table fullOuterJoin(Table right, Expression joinPredicate)

Joins two Tables. Similar to a SQL full outer join. The fields of the two joined operations must not overlap; use as to rename fields if necessary.

Note: Both tables must be bound to the same TableEnvironment and its TableConfig must have null check enabled (default).
Java Example:
left.fullOuterJoin(right, $("a").isEqual($("b")))
.select($("a"), $("b"), $("d"));
Scala Example:
left.fullOuterJoin(right, $"a" === $"b")
.select($"a", $"b", $"d")
Table joinLateral(Expression tableFunctionCall)

Joins this Table with a user-defined TableFunction. This join is similar to a SQL inner join with an ON TRUE predicate, but works with a table function. Each row of the table is joined with all rows produced by the table function.
Java Example:
class MySplitUDTF extends TableFunction<String> {
public void eval(String str) {
str.split("#").forEach(this::collect);
}
}
table.joinLateral(call(MySplitUDTF.class, $("c")).as("s"))
.select($("a"), $("b"), $("c"), $("s"));
Scala Example:
class MySplitUDTF extends TableFunction[String] {
def eval(str: String): Unit = {
str.split("#").foreach(collect)
}
}
val split = new MySplitUDTF()
table.joinLateral(split($"c") as "s")
.select($"a", $"b", $"c", $"s")
Table joinLateral(Expression tableFunctionCall, Expression joinPredicate)

Joins this Table with a user-defined TableFunction. This join is similar to a SQL inner join, but works with a table function. Each row of the table is joined with all rows produced by the table function.
Java Example:
class MySplitUDTF extends TableFunction<String> {
public void eval(String str) {
str.split("#").forEach(this::collect);
}
}
table.joinLateral(call(MySplitUDTF.class, $("c")).as("s"), $("a").isEqual($("s")))
.select($("a"), $("b"), $("c"), $("s"));
Scala Example:
class MySplitUDTF extends TableFunction[String] {
def eval(str: String): Unit = {
str.split("#").foreach(collect)
}
}
val split = new MySplitUDTF()
table.joinLateral(split($"c") as "s", $"a" === $"s")
.select($"a", $"b", $"c", $"s")
Table leftOuterJoinLateral(Expression tableFunctionCall)

Joins this Table with a user-defined TableFunction. This join is similar to a SQL left outer join with an ON TRUE predicate, but works with a table function. Each row of the table is joined with all rows produced by the table function. If the table function does not produce any row, the outer row is padded with nulls.
Java Example:
class MySplitUDTF extends TableFunction<String> {
public void eval(String str) {
str.split("#").forEach(this::collect);
}
}
table.leftOuterJoinLateral(call(MySplitUDTF.class, $("c")).as("s"))
.select($("a"), $("b"), $("c"), $("s"));
Scala Example:
class MySplitUDTF extends TableFunction[String] {
def eval(str: String): Unit = {
str.split("#").foreach(collect)
}
}
val split = new MySplitUDTF()
table.leftOuterJoinLateral(split($"c") as "s")
.select($"a", $"b", $"c", $"s")
Table leftOuterJoinLateral(Expression tableFunctionCall, Expression joinPredicate)

Joins this Table with a user-defined TableFunction. This join is similar to a SQL left outer join, but works with a table function. Each row of the table is joined with all rows produced by the table function. If the table function does not produce any row, the outer row is padded with nulls.
Java Example:
class MySplitUDTF extends TableFunction<String> {
public void eval(String str) {
str.split("#").forEach(this::collect);
}
}
table.leftOuterJoinLateral(call(MySplitUDTF.class, $("c")).as("s"), $("a").isEqual($("s")))
.select($("a"), $("b"), $("c"), $("s"));
Scala Example:
class MySplitUDTF extends TableFunction[String] {
def eval(str: String): Unit = {
str.split("#").foreach(collect)
}
}
val split = new MySplitUDTF()
table.leftOuterJoinLateral(split($"c") as "s", $"a" === $"s")
.select($"a", $"b", $"c", $"s")
Table minus(Table right)

Minus of two Tables with duplicate records removed. Similar to a SQL EXCEPT clause. Minus returns records from the left table that do not exist in the right table. Duplicate records in the left table are returned exactly once, i.e., duplicates are removed. Both tables must have identical field types.

Note: Both tables must be bound to the same TableEnvironment.
Example:
left.minus(right);
Table minusAll(Table right)

Minus of two Tables. Similar to a SQL EXCEPT ALL clause. MinusAll returns the records that do not exist in the right table. A record that is present n times in the left table and m times in the right table is returned (n - m) times, i.e., as many duplicates as are present in the right table are removed. Both tables must have identical field types.

Note: Both tables must be bound to the same TableEnvironment.
Example:
left.minusAll(right);
Table union(Table right)

Unions two Tables with duplicate records removed. Similar to a SQL UNION. The fields of the two union operations must fully overlap.

Note: Both tables must be bound to the same TableEnvironment.
Example:
left.union(right);
Table unionAll(Table right)

Unions two Tables. Similar to a SQL UNION ALL. The fields of the two union operations must fully overlap.

Note: Both tables must be bound to the same TableEnvironment.
Example:
left.unionAll(right);
Table intersect(Table right)

Intersects two Tables with duplicate records removed. Intersect returns records that exist in both tables. If a record is present in one or both tables more than once, it is returned just once, i.e., the resulting table has no duplicate records. Similar to a SQL INTERSECT. The fields of the two intersect operations must fully overlap.

Note: Both tables must be bound to the same TableEnvironment.
Example:
left.intersect(right);
Table intersectAll(Table right)

Intersects two Tables. IntersectAll returns records that exist in both tables. If a record is present in both tables more than once, it is returned as many times as it is present in both tables, i.e., the resulting table might have duplicate records. Similar to a SQL INTERSECT ALL. The fields of the two intersect operations must fully overlap.

Note: Both tables must be bound to the same TableEnvironment.
Example:
left.intersectAll(right);
Table orderBy(Expression... fields)

Sorts the given Table. Similar to SQL ORDER BY. The resulting Table is globally sorted across all parallel partitions.
Java Example:
tab.orderBy($("name").desc());
Scala Example:
tab.orderBy($"name".desc)
For unbounded tables, this operation requires a sorting on a time attribute or a subsequent fetch operation.
Table offset(int offset)

Limits a (possibly sorted) result from an offset position.

This method can be combined with a preceding orderBy(Expression...) call for a deterministic order and a subsequent fetch(int) call to return n rows after skipping the first o rows.
// skips the first 3 rows and returns all following rows.
tab.orderBy($("name").desc()).offset(3);
// skips the first 10 rows and returns the next 5 rows.
tab.orderBy($("name").desc()).offset(10).fetch(5);
For unbounded tables, this operation requires a subsequent fetch operation.
Parameters:
offset - the number of records to skip.

Table fetch(int fetch)

Limits a (possibly sorted) result to the first n rows.

This method can be combined with a preceding orderBy(Expression...) call for a deterministic order and an offset(int) call to return n rows after skipping the first o rows.
// returns the first 3 records.
tab.orderBy($("name").desc()).fetch(3);
// skips the first 10 rows and returns the next 5 rows.
tab.orderBy($("name").desc()).offset(10).fetch(5);
Parameters:
fetch - the number of records to return. Fetch must be >= 0.

default Table limit(int fetch)

Limits a (possibly sorted) result to the first n rows. This method is a synonym for fetch(int).

default Table limit(int offset, int fetch)

Limits a (possibly sorted) result to the first n rows from an offset position. This method is a synonym for offset(int) followed by fetch(int).
GroupWindowedTable window(GroupWindow groupWindow)

Groups the records of a table by assigning them to windows defined by a time or row interval.

For streaming tables of infinite size, grouping into windows is required to define finite groups on which group-based aggregates can be computed.

For batch tables of finite size, windowing essentially provides shortcuts for time-based groupBy.
Note: Computing windowed aggregates on a streaming table is only a parallel
operation if additional grouping attributes are added to the groupBy(...)
clause. If
the groupBy(...)
only references a GroupWindow alias, the streamed table will be
processed by a single task, i.e., with parallelism 1.
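As an illustrative sketch (the table name MyTable and the columns user and rowtime are assumptions for this example), a tumbling group window combined with an additional grouping key, which keeps the operation parallel as described above, might look like:

```java
// Sketch: assumes a table with a time attribute "rowtime" and a column "user".
// Tumble is the built-in tumbling group-window builder.
Table result = tableEnv.from("MyTable")
    .window(Tumble.over(lit(10).minutes()).on($("rowtime")).as("w"))
    .groupBy($("w"), $("user"))  // window alias plus a key: parallel execution
    .select($("user"), $("w").start(), $("user").count());
```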
Parameters:
groupWindow - a group window that specifies how elements are grouped.

OverWindowedTable window(OverWindow... overWindows)

Defines over-windows on the records of a table.
An over-window defines for each record an interval of records over which aggregation functions can be computed.
Java Example:
table
    .window(Over.partitionBy($("c")).orderBy($("rowTime")).preceding(lit(10).seconds()).as("ow"))
    .select($("c"), $("b").count().over($("ow")), $("e").sum().over($("ow")));
Scala Example:
table
.window(Over partitionBy $"c" orderBy $"rowTime" preceding 10.seconds as "ow")
.select($"c", $"b".count over $"ow", $"e".sum over $"ow")
Note: Computing over window aggregates on a streaming table is only a parallel operation if the window is partitioned. Otherwise, the whole stream will be processed by a single task, i.e., with parallelism 1.
Note: Over-windows for batch tables are currently not supported.
Parameters:
overWindows - windows that specify the record interval over which aggregations are computed.

Table addColumns(Expression... fields)

Adds additional columns.
Java Example:
tab.addColumns(
$("a").plus(1).as("a1"),
concat($("b"), "sunny").as("b1")
);
Scala Example:
tab.addColumns(
$"a" + 1 as "a1",
concat($"b", "sunny") as "b1"
)
Table addOrReplaceColumns(Expression... fields)

Adds additional columns; existing fields of the same name are replaced.
Java Example:
tab.addOrReplaceColumns(
$("a").plus(1).as("a1"),
concat($("b"), "sunny").as("b1")
);
Scala Example:
tab.addOrReplaceColumns(
$"a" + 1 as "a1",
concat($"b", "sunny") as "b1"
)
Table renameColumns(Expression... fields)

Renames existing columns.
Java Example:
tab.renameColumns(
$("a").as("a1"),
$("b").as("b1")
);
Scala Example:
tab.renameColumns(
$"a" as "a1",
$"b" as "b1"
)
Table dropColumns(Expression... fields)

Drops existing columns.
Java Example:
tab.dropColumns($("a"), $("b"));
Scala Example:
tab.dropColumns($"a", $"b")
Table map(Expression mapFunction)

Performs a map operation with a user-defined scalar function or built-in scalar function.
Java Example:
tab.map(call(MyMapFunction.class, $("c")))
Scala Example:
val func = new MyMapFunction()
tab.map(func($"c"))
Table flatMap(Expression tableFunction)

Performs a flatMap operation with a user-defined table function or built-in table function.
Java Example:
tab.flatMap(call(MyFlatMapFunction.class, $("c")))
Scala Example:
val func = new MyFlatMapFunction()
tab.flatMap(func($"c"))
AggregatedTable aggregate(Expression aggregateFunction)

Performs a global aggregate operation with an aggregate function. Close the aggregate(Expression) with a select statement. The output will be flattened if the output type is a composite type.
Java Example:
tab.aggregate(call(MyAggregateFunction.class, $("a"), $("b")).as("f0", "f1", "f2"))
.select($("f0"), $("f1"));
Scala Example:
val aggFunc = new MyAggregateFunction
table.aggregate(aggFunc($"a", $"b") as ("f0", "f1", "f2"))
.select($"f0", $"f1")
FlatAggregateTable flatAggregate(Expression tableAggregateFunction)

Performs a global flatAggregate without groupBy.
Java Example:
tab.flatAggregate(call(MyTableAggregateFunction.class, $("a"), $("b")).as("x", "y", "z"))
.select($("x"), $("y"), $("z"));
Scala Example:
val tableAggFunc: TableAggregateFunction = new MyTableAggregateFunction
tab.flatAggregate(tableAggFunc($"a", $"b") as ("x", "y", "z"))
.select($"x", $"y", $"z")
TablePipeline insertInto(String tablePath)

Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) that was registered under the specified path.

See the documentation of TableEnvironment.useDatabase(String) or TableEnvironment.useCatalog(String) for the rules on path resolution.
Example:
Table table = tableEnv.sqlQuery("SELECT * FROM MyTable");
TablePipeline tablePipeline = table.insertInto("MySinkTable");
TableResult tableResult = tablePipeline.execute();
tableResult.await();
One can execute the returned TablePipeline using Executable.execute(), or compile it to a CompiledPlan using Compilable.compilePlan().

If multiple pipelines should insert data into one or more sink tables as part of a single execution, use a StatementSet (see TableEnvironment.createStatementSet()).

Parameters:
tablePath - The path of the registered table (backed by a DynamicTableSink).

TablePipeline insertInto(String tablePath, boolean overwrite)

Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) that was registered under the specified path.

See the documentation of TableEnvironment.useDatabase(String) or TableEnvironment.useCatalog(String) for the rules on path resolution.
Example:
Table table = tableEnv.sqlQuery("SELECT * FROM MyTable");
TablePipeline tablePipeline = table.insertInto("MySinkTable", true);
TableResult tableResult = tablePipeline.execute();
tableResult.await();
One can execute the returned TablePipeline using Executable.execute(), or compile it to a CompiledPlan using Compilable.compilePlan().

If multiple pipelines should insert data into one or more sink tables as part of a single execution, use a StatementSet (see TableEnvironment.createStatementSet()).

Parameters:
tablePath - The path of the registered table (backed by a DynamicTableSink).
overwrite - Indicates whether existing data should be overwritten.

TablePipeline insertInto(TableDescriptor descriptor)

Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) expressed via the given TableDescriptor.
The descriptor won't be registered in the catalog, but it will be propagated directly in the operation tree. Note that calling this method multiple times, even with the same descriptor, results in multiple sink table instances.

This method allows declaring a Schema for the sink descriptor. The declaration is similar to a CREATE TABLE DDL in SQL and allows to, for example, overwrite automatically derived columns with a custom DataType.
It is possible to declare a schema without physical/regular columns. In this case, those columns will be automatically derived and implicitly put at the beginning of the schema declaration.
Examples:
Schema schema = Schema.newBuilder()
.column("f0", DataTypes.STRING())
.build();
Table table = tableEnv.from(TableDescriptor.forConnector("datagen")
.schema(schema)
.build());
table.insertInto(TableDescriptor.forConnector("blackhole")
.schema(schema)
.build());
One can execute the returned TablePipeline using Executable.execute(), or compile it to a CompiledPlan using Compilable.compilePlan().

If multiple pipelines should insert data into one or more sink tables as part of a single execution, use a StatementSet (see TableEnvironment.createStatementSet()).

Parameters:
descriptor - Descriptor describing the sink table into which data should be inserted.

TablePipeline insertInto(TableDescriptor descriptor, boolean overwrite)

Declares that the pipeline defined by the given Table object should be written to a table (backed by a DynamicTableSink) expressed via the given TableDescriptor.
The descriptor won't be registered in the catalog, but it will be propagated directly in the operation tree. Note that calling this method multiple times, even with the same descriptor, results in multiple sink table instances.

This method allows declaring a Schema for the sink descriptor. The declaration is similar to a CREATE TABLE DDL in SQL and allows to, for example, overwrite automatically derived columns with a custom DataType.
It is possible to declare a schema without physical/regular columns. In this case, those columns will be automatically derived and implicitly put at the beginning of the schema declaration.
Examples:
Schema schema = Schema.newBuilder()
.column("f0", DataTypes.STRING())
.build();
Table table = tableEnv.from(TableDescriptor.forConnector("datagen")
.schema(schema)
.build());
table.insertInto(TableDescriptor.forConnector("blackhole")
.schema(schema)
.build(), true);
One can execute the returned TablePipeline using Executable.execute(), or compile it to a CompiledPlan using Compilable.compilePlan().

If multiple pipelines should insert data into one or more sink tables as part of a single execution, use a StatementSet (see TableEnvironment.createStatementSet()).

Parameters:
descriptor - Descriptor describing the sink table into which data should be inserted.
overwrite - Indicates whether existing data should be overwritten.

default TableResult executeInsert(String tablePath)

Shorthand for tableEnv.insertInto(tablePath).execute().

See Also:
insertInto(String), Executable.execute()
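For instance, as a sketch (the sink table name MySinkTable is an assumption; it must have been registered beforehand):

```java
// Sketch: assumes a sink table "MySinkTable" is registered in the catalog.
// executeInsert submits the insert pipeline immediately and returns a TableResult.
TableResult result = table.executeInsert("MySinkTable");
result.await(); // optionally block until the job has finished
```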
default TableResult executeInsert(String tablePath, boolean overwrite)

Shorthand for tableEnv.insertInto(tablePath, overwrite).execute().

See Also:
insertInto(String, boolean), Executable.execute()

default TableResult executeInsert(TableDescriptor descriptor)

Shorthand for tableEnv.insertInto(descriptor).execute().

See Also:
insertInto(TableDescriptor), Executable.execute()

default TableResult executeInsert(TableDescriptor descriptor, boolean overwrite)

Shorthand for tableEnv.insertInto(descriptor, overwrite).execute().

See Also:
insertInto(TableDescriptor, boolean), Executable.execute()

Copyright © 2014–2024 The Apache Software Foundation. All rights reserved.