This document gives a deep-dive into the available transformations on DataSets. For a general introduction to the
Flink Java API, please refer to the Programming Guide.
For zipping elements in a data set with a dense index, please refer to the Zip Elements Guide.
Map
The Map transformation applies a user-defined map function on each element of a DataSet. It implements a one-to-one mapping, that is, exactly one element must be returned by the function.
The following code transforms a DataSet of Integer pairs into a DataSet of Integers:
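A minimal sketch of such a transformation, assuming the two fields of each pair are summed (the IntAdder name and the summing logic are illustrative):

```java
DataSet<Tuple2<Integer, Integer>> intPairs = // [...]
DataSet<Integer> intSums = intPairs.map(new IntAdder());

// sums the two fields of each input pair
public class IntAdder implements MapFunction<Tuple2<Integer, Integer>, Integer> {
  @Override
  public Integer map(Tuple2<Integer, Integer> in) {
    return in.f0 + in.f1;
  }
}
```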
FlatMap
The FlatMap transformation applies a user-defined flat-map function on each element of a DataSet.
This variant of a map function can return arbitrarily many result elements (including none) for each input element.
The following code transforms a DataSet of text lines into a DataSet of words:
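A sketch, assuming lines are split on non-word characters (the Tokenizer name is illustrative):

```java
DataSet<String> textLines = // [...]
DataSet<String> words = textLines.flatMap(new Tokenizer());

// splits each line into words and emits each word separately
public class Tokenizer implements FlatMapFunction<String, String> {
  @Override
  public void flatMap(String value, Collector<String> out) {
    for (String token : value.split("\\W+")) {
      out.collect(token);
    }
  }
}
```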
MapPartition
MapPartition transforms a parallel partition in a single function call. The map-partition function
gets the partition as an Iterable and can produce an arbitrary number of result values. The number of elements in each partition depends on the degree-of-parallelism
and previous operations.
The following code transforms a DataSet of text lines into a DataSet of counts per partition:
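A sketch that emits one count per parallel partition (the PartitionCounter name is illustrative):

```java
DataSet<String> textLines = // [...]
DataSet<Long> counts = textLines.mapPartition(new PartitionCounter());

// emits the number of elements per parallel partition
public class PartitionCounter implements MapPartitionFunction<String, Long> {
  @Override
  public void mapPartition(Iterable<String> values, Collector<Long> out) {
    long c = 0;
    for (String s : values) {
      c++;
    }
    out.collect(c);
  }
}
```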
Filter
The Filter transformation applies a user-defined filter function on each element of a DataSet and retains only those elements for which the function returns true.
The following code removes all Integers smaller than zero from a DataSet:
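A sketch (the NaturalNumberFilter name is illustrative):

```java
DataSet<Integer> intNumbers = // [...]
DataSet<Integer> naturalNumbers = intNumbers.filter(new NaturalNumberFilter());

// retains only non-negative Integers
public class NaturalNumberFilter implements FilterFunction<Integer> {
  @Override
  public boolean filter(Integer number) {
    return number >= 0;
  }
}
```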
IMPORTANT: The system assumes that the function does not modify the elements on which the predicate is applied. Violating this assumption
can lead to incorrect results.
Projection of Tuple DataSet
The Project transformation removes or moves Tuple fields of a Tuple DataSet.
The project(int...) method selects Tuple fields that should be retained by their index and defines their order in the output Tuple.
Projections do not require the definition of a user function.
The following code shows different ways to apply a Project transformation on a DataSet:
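A sketch with an illustrative tuple type:

```java
DataSet<Tuple3<Integer, Double, String>> in = // [...]
// converts Tuple3<Integer, Double, String> into Tuple2<String, Integer>
// by retaining the third and first field, in that order
DataSet<Tuple2<String, Integer>> out = in.project(2, 0);
```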
Projection with Type Hint
Note that the Java compiler cannot infer the return type of the project operator. This can cause a problem if you call another operator on the result of a project operator, such as:
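For example (a sketch with an illustrative tuple type), the following may not compile because the projected type cannot be inferred:

```java
DataSet<Tuple5<String, String, String, String, String>> ds = // [...]
// fails: the compiler cannot infer the output type of project(0)
DataSet<Tuple1<String>> ds2 = ds.project(0).distinct(0);
```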
This problem can be overcome by hinting the return type of the project operator like this:
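```java
// the explicit type argument tells the compiler the output type of project(0)
DataSet<Tuple1<String>> ds2 = ds.<Tuple1<String>>project(0).distinct(0);
```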
Transformations on Grouped DataSet
The reduce operations can operate on grouped data sets. Specifying the key to
be used for grouping can be done in many ways:
key expressions
a key-selector function
one or more field position keys (Tuple DataSet only)
Case Class fields (Case Classes only)
Please look at the reduce examples to see how the grouping keys are specified.
Reduce on Grouped DataSet
A Reduce transformation that is applied on a grouped DataSet reduces each group to a single
element using a user-defined reduce function.
For each group of input elements, a reduce function successively combines pairs of elements into one
element until only a single element for each group remains.
Note that for a ReduceFunction the keyed fields of the returned object should match the input
values. This is because reduce is implicitly combinable and objects emitted from the combine
operator are again grouped by key when passed to the reduce operator.
Reduce on DataSet Grouped by Key Expression
Key expressions specify one or more fields of each element of a DataSet. Each key expression is
either the name of a public field or a getter method. A dot can be used to drill down into objects.
The key expression “*” selects all fields.
The following code shows how to group a POJO DataSet using key expressions and to reduce it
with a reduce function.
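A sketch with an illustrative word-count POJO (the WC and WordCounter names are assumptions):

```java
// some ordinary POJO
public class WC {
  public String word;
  public int count;

  public WC() {}

  public WC(String word, int count) {
    this.word = word;
    this.count = count;
  }
}

DataSet<WC> words = // [...]
DataSet<WC> wordCounts = words
                         // group DataSet on field "word"
                         .groupBy("word")
                         // apply ReduceFunction on grouped DataSet
                         .reduce(new WordCounter());

// sums the counts of two WC objects with the same word
public class WordCounter implements ReduceFunction<WC> {
  @Override
  public WC reduce(WC in1, WC in2) {
    return new WC(in1.word, in1.count + in2.count);
  }
}
```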
Reduce on DataSet Grouped by KeySelector Function
A key-selector function extracts a key value from each element of a DataSet. The extracted key
value is used to group the DataSet.
The following code shows how to group a POJO DataSet using a key-selector function and to reduce it
with a reduce function.
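A sketch, reusing the illustrative WC POJO and WordCounter function from the previous example (the SelectWord name is an assumption):

```java
DataSet<WC> words = // [...]
DataSet<WC> wordCounts = words
                         // group DataSet using a key-selector function
                         .groupBy(new SelectWord())
                         .reduce(new WordCounter());

// extracts the grouping key from each WC object
public class SelectWord implements KeySelector<WC, String> {
  @Override
  public String getKey(WC w) {
    return w.word;
  }
}
```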
Reduce on DataSet Grouped by Field Position Keys (Tuple DataSets only)
Field position keys specify one or more fields of a Tuple DataSet that are used as grouping keys.
The following code shows how to use field position keys and apply a reduce function:
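A sketch (the MyTupleReducer name and the summing of the Double field are illustrative):

```java
DataSet<Tuple3<String, Integer, Double>> tuples = // [...]
DataSet<Tuple3<String, Integer, Double>> reducedTuples = tuples
                                         // group DataSet on first and second tuple field
                                         .groupBy(0, 1)
                                         // apply ReduceFunction on grouped DataSet
                                         .reduce(new MyTupleReducer());

// keeps the grouping keys and sums the Double field
public class MyTupleReducer
         implements ReduceFunction<Tuple3<String, Integer, Double>> {
  @Override
  public Tuple3<String, Integer, Double> reduce(Tuple3<String, Integer, Double> in1,
                                                Tuple3<String, Integer, Double> in2) {
    return new Tuple3<>(in1.f0, in1.f1, in1.f2 + in2.f2);
  }
}
```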
Reduce on DataSet grouped by Case Class Fields
When using Case Classes, you can also specify the grouping key using the names of the fields.
GroupReduce on Grouped DataSet
A GroupReduce transformation that is applied on a grouped DataSet calls a user-defined
group-reduce function for each group. The difference
between this and Reduce is that the user-defined function gets the whole group at once.
The function is invoked with an Iterable over all elements of a group and can return an arbitrary
number of result elements.
GroupReduce on DataSet Grouped by Field Position Keys (Tuple DataSets only)
The following code shows how duplicate strings can be removed from a DataSet grouped by Integer.
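A sketch using a HashSet to collect the distinct strings of each group (class names are illustrative):

```java
DataSet<Tuple2<Integer, String>> input = // [...]
DataSet<Tuple2<Integer, String>> output = input
                           // group DataSet by the first tuple field
                           .groupBy(0)
                           // apply GroupReduceFunction on each group
                           .reduceGroup(new DistinctReduce());

public class DistinctReduce
         implements GroupReduceFunction<Tuple2<Integer, String>, Tuple2<Integer, String>> {

  @Override
  public void reduce(Iterable<Tuple2<Integer, String>> in,
                     Collector<Tuple2<Integer, String>> out) {
    Set<String> uniqStrings = new HashSet<String>();
    Integer key = null;

    // collect all distinct strings of the group
    for (Tuple2<Integer, String> t : in) {
      key = t.f0;
      uniqStrings.add(t.f1);
    }

    // emit each distinct string once
    for (String s : uniqStrings) {
      out.collect(new Tuple2<Integer, String>(key, s));
    }
  }
}
```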
GroupReduce on DataSet Grouped by Key Expression, KeySelector Function, or Case Class Fields
A group-reduce function accesses the elements of a group using an Iterable. Optionally, the Iterable can hand out the elements of a group in a specified order. In many cases this can help to reduce the complexity of a user-defined
group-reduce function and improve its efficiency.
The following code shows another example how to remove duplicate Strings in a DataSet grouped by an Integer and sorted by String.
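A sketch: because the group is sorted, duplicates are adjacent and no set is needed (the class name is illustrative):

```java
DataSet<Tuple2<Integer, String>> input = // [...]
DataSet<Tuple2<Integer, String>> output = input
                           .groupBy(0)                    // group DataSet by first field
                           .sortGroup(1, Order.ASCENDING) // sort groups on second tuple field
                           .reduceGroup(new SortedDistinctReduce());

public class SortedDistinctReduce
         implements GroupReduceFunction<Tuple2<Integer, String>, Tuple2<Integer, String>> {

  @Override
  public void reduce(Iterable<Tuple2<Integer, String>> in,
                     Collector<Tuple2<Integer, String>> out) {
    String prev = null;

    // duplicates are adjacent because the group is sorted by String
    for (Tuple2<Integer, String> t : in) {
      if (!t.f1.equals(prev)) {
        out.collect(t);
        prev = t.f1;
      }
    }
  }
}
```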
Note: A GroupSort often comes for free if the grouping is established using a sort-based execution strategy of an operator before the reduce operation.
Combinable GroupReduceFunctions
In contrast to a reduce function, a group-reduce function is not
implicitly combinable. In order to make a group-reduce function
combinable it must implement the GroupCombineFunction interface.
Important: The generic input and output types of
the GroupCombineFunction interface must be equal to the generic input type
of the GroupReduceFunction as shown in the following example:
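A sketch of a combinable group-reduce function that sums Integer values per String key and emits a formatted String (the concrete logic is illustrative):

```java
public class MyCombinableGroupReducer implements
  GroupReduceFunction<Tuple2<String, Integer>, String>,
  GroupCombineFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>
{
  @Override
  public void reduce(Iterable<Tuple2<String, Integer>> in, Collector<String> out) {
    String key = null;
    int sum = 0;
    for (Tuple2<String, Integer> curr : in) {
      key = curr.f0;
      sum += curr.f1;
    }
    // concatenate key and sum and emit a String
    out.collect(key + "-" + sum);
  }

  @Override
  public void combine(Iterable<Tuple2<String, Integer>> in,
                      Collector<Tuple2<String, Integer>> out) {
    String key = null;
    int sum = 0;
    for (Tuple2<String, Integer> curr : in) {
      key = curr.f0;
      sum += curr.f1;
    }
    // emit a partial sum; input and output types of combine() are identical
    out.collect(new Tuple2<>(key, sum));
  }
}
```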
GroupCombine on a Grouped DataSet
The GroupCombine transformation is the generalized form of the combine step in
the combinable GroupReduceFunction. It is generalized in the sense that it
allows combining of input type I to an arbitrary output type O. In contrast,
the combine step in the GroupReduce only allows combining from input type I to
output type I. This is because the reduce step in the GroupReduceFunction
expects input type I.
In some applications, it is desirable to combine a DataSet into an intermediate
format before performing additional transformations (e.g., to reduce data
size). This can be achieved with a CombineGroup transformation at very little
cost.
Note: The GroupCombine on a Grouped DataSet is performed in memory with a
greedy strategy which may not process all data at once but in multiple
steps. It is also performed on the individual partitions without a data
exchange like in a GroupReduce transformation. This may lead to partial
results.
The following example demonstrates the use of a CombineGroup transformation for
an alternative WordCount implementation.
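A sketch of such an implementation; grouping the raw words uses a key-selector function here, and all names are illustrative:

```java
DataSet<String> input = // [...] the words received as input

// combine words into (word, partial count) tuples before the shuffle
DataSet<Tuple2<String, Integer>> combinedWords = input
  .groupBy(new KeySelector<String, String>() {
    public String getKey(String word) { return word; }
  })
  .combineGroup(new GroupCombineFunction<String, Tuple2<String, Integer>>() {
    public void combine(Iterable<String> words, Collector<Tuple2<String, Integer>> out) {
      String key = null;
      int count = 0;
      for (String word : words) {
        key = word;
        count++;
      }
      // emit a partial count; counts may still be split across partitions
      out.collect(new Tuple2<>(key, count));
    }
  });

// group by the word again and sum up the partial counts
DataSet<Tuple2<String, Integer>> output = combinedWords
  .groupBy(0)
  .reduceGroup(new GroupReduceFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>() {
    public void reduce(Iterable<Tuple2<String, Integer>> words,
                       Collector<Tuple2<String, Integer>> out) {
      String key = null;
      int count = 0;
      for (Tuple2<String, Integer> word : words) {
        key = word.f0;
        count += word.f1;
      }
      // emit the final count
      out.collect(new Tuple2<>(key, count));
    }
  });
```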
The above alternative WordCount implementation demonstrates how the GroupCombine
combines words before performing the GroupReduce transformation. The above
example is just a proof of concept. Note how the combine step changes the type
of the DataSet, which would normally require an additional Map transformation
before executing the GroupReduce.
Aggregate on Grouped Tuple DataSet
There are some common aggregation operations that are frequently used. The Aggregate transformation provides the following built-in aggregation functions:
Sum,
Min, and
Max.
The Aggregate transformation can only be applied on a Tuple DataSet and supports only field position keys for grouping.
The following code shows how to apply an Aggregation transformation on a DataSet grouped by field position keys:
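A sketch using an illustrative Tuple3 input:

```java
DataSet<Tuple3<Integer, String, Double>> input = // [...]
DataSet<Tuple3<Integer, String, Double>> output = input
                        .groupBy(1)                     // group DataSet on second field
                        .aggregate(Aggregations.SUM, 0) // compute sum of the first field
                        .and(Aggregations.MIN, 2);      // compute minimum of the third field
```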
To apply multiple aggregations on a DataSet, it is necessary to use the .and() function after the first aggregate; that is, .aggregate(SUM, 0).and(MIN, 2) produces the sum of field 0 and the minimum of field 2 of the original DataSet.
In contrast, .aggregate(SUM, 0).aggregate(MIN, 2) applies an aggregation on an aggregation. In the given example, it would produce the minimum of field 2 after calculating the sum of field 0 grouped by field 1.
Note: The set of aggregation functions will be extended in the future.
MinBy / MaxBy on Grouped Tuple DataSet
The MinBy (MaxBy) transformation selects a single tuple for each group of tuples. The selected tuple is the tuple whose values of one or more specified fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary tuple of these tuples is returned.
The following code shows how to select the tuple with the minimum values for the Integer and Double fields for each group of tuples with the same String value from a DataSet<Tuple3<Integer, String, Double>>:
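A sketch:

```java
DataSet<Tuple3<Integer, String, Double>> input = // [...]
DataSet<Tuple3<Integer, String, Double>> output = input
                        .groupBy(1)   // group DataSet on second field
                        .minBy(0, 2); // select tuple with minimum values for first and third field
```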
Reduce on full DataSet
The Reduce transformation applies a user-defined reduce function to all elements of a DataSet.
The reduce function successively combines pairs of elements into one element until only a single element remains.
The following code shows how to sum all elements of an Integer DataSet:
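A sketch (the IntSummer name is illustrative):

```java
DataSet<Integer> intNumbers = // [...]
DataSet<Integer> sum = intNumbers.reduce(new IntSummer());

// adds two Integer values
public class IntSummer implements ReduceFunction<Integer> {
  @Override
  public Integer reduce(Integer num1, Integer num2) {
    return num1 + num2;
  }
}
```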
Reducing a full DataSet using the Reduce transformation implies that the final Reduce operation cannot be done in parallel. However, a reduce function is automatically combinable such that a Reduce transformation does not limit scalability for most use cases.
GroupReduce on full DataSet
The GroupReduce transformation applies a user-defined group-reduce function on all elements of a DataSet.
A group-reduce function can iterate over all elements of the DataSet and return an arbitrary number of result elements.
The following example shows how to apply a GroupReduce transformation on a full DataSet:
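A sketch that counts the elements of the full DataSet (the counting logic and the class name are illustrative):

```java
DataSet<Integer> input = // [...]
// the group-reduce function sees all elements of the DataSet
DataSet<Integer> output = input.reduceGroup(new CountElements());

public class CountElements implements GroupReduceFunction<Integer, Integer> {
  @Override
  public void reduce(Iterable<Integer> values, Collector<Integer> out) {
    int count = 0;
    for (Integer i : values) {
      count++;
    }
    out.collect(count);
  }
}
```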
Note: A GroupReduce transformation on a full DataSet cannot be done in parallel if the
group-reduce function is not combinable. Therefore, this can be a very compute intensive operation.
See the paragraph on “Combinable GroupReduceFunctions” above to learn how to implement a
combinable group-reduce function.
GroupCombine on a full DataSet
The GroupCombine on a full DataSet works similarly to the GroupCombine on a
grouped DataSet. The data is partitioned on all nodes and then combined in a
greedy fashion (i.e., only data fitting into memory is combined at once).
Aggregate on full Tuple DataSet
There are some common aggregation operations that are frequently used. The Aggregate transformation
provides the following built-in aggregation functions:
Sum,
Min, and
Max.
The Aggregate transformation can only be applied on a Tuple DataSet.
The following code shows how to apply an Aggregation transformation on a full DataSet:
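A sketch using an illustrative Tuple2 input:

```java
DataSet<Tuple2<Integer, Double>> input = // [...]
DataSet<Tuple2<Integer, Double>> output = input
                        .aggregate(Aggregations.SUM, 0) // compute sum of the first field
                        .and(Aggregations.MIN, 1);      // compute minimum of the second field
```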
Note: Extending the set of supported aggregation functions is on our roadmap.
MinBy / MaxBy on full Tuple DataSet
The MinBy (MaxBy) transformation selects a single tuple from a DataSet of tuples. The selected tuple is the tuple whose values of one or more specified fields are minimum (maximum). The fields which are used for comparison must be valid key fields, i.e., comparable. If multiple tuples have minimum (maximum) field values, an arbitrary tuple of these tuples is returned.
The following code shows how to select the tuple with the maximum values for the Integer and Double fields from a DataSet<Tuple3<Integer, String, Double>>:
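A sketch:

```java
DataSet<Tuple3<Integer, String, Double>> input = // [...]
DataSet<Tuple3<Integer, String, Double>> output = input
                        .maxBy(0, 2); // select tuple with maximum values for first and third field
```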
Distinct
The Distinct transformation computes the DataSet of the distinct elements of the source DataSet.
The following code removes all duplicate elements from the DataSet:
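A sketch:

```java
DataSet<Tuple2<Integer, Double>> input = // [...]
DataSet<Tuple2<Integer, Double>> output = input.distinct();
```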
It is also possible to change how the distinction of the elements in the DataSet is decided, using:
one or more field position keys (Tuple DataSets only),
a key-selector function, or
a key expression.
Distinct with field position keys
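A sketch:

```java
DataSet<Tuple2<Integer, Double>> input = // [...]
// elements are distinct if their first fields differ
DataSet<Tuple2<Integer, Double>> output = input.distinct(0);
```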
Distinct with KeySelector function
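A sketch that decides distinctness on the absolute value of each element (the key logic is illustrative):

```java
DataSet<Integer> input = // [...]
DataSet<Integer> output = input.distinct(new KeySelector<Integer, Integer>() {
  public Integer getKey(Integer t) {
    // elements are distinct if their absolute values differ
    return Math.abs(t);
  }
});
```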
Distinct with key expression
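A sketch with an illustrative POJO:

```java
// some ordinary POJO
public class CustomType {
  public String aName;
  public int aNumber;
  // [...]
}

DataSet<CustomType> input = // [...]
// elements are distinct if both fields differ
DataSet<CustomType> output = input.distinct("aName", "aNumber");
```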
It is also possible to indicate that all fields should be used with the wildcard character:
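```java
DataSet<CustomType> input = // [...]
// all fields of the element are considered
DataSet<CustomType> output = input.distinct("*");
```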
Join
The Join transformation joins two DataSets into one DataSet. The elements of both DataSets are joined on one or more keys which can be specified using
a key expression
a key-selector function
one or more field position keys (Tuple DataSet only)
Case Class fields
There are a few different ways to perform a Join transformation, which are shown in the following.
Default Join (Join into Tuple2)
The default Join transformation produces a new Tuple DataSet with two fields. Each tuple holds a joined element of the first input DataSet in the first tuple field and a matching element of the second input DataSet in the second field.
The following code shows a default Join transformation using field position keys:
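A sketch with illustrative tuple types:

```java
DataSet<Tuple2<Integer, String>> input1 = // [...]
DataSet<Tuple2<Double, Integer>> input2 = // [...]
// the result contains pairs of matching elements
DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Double, Integer>>> result =
            input1.join(input2)
                  .where(0)    // key of the first input (tuple field 0)
                  .equalTo(1); // key of the second input (tuple field 1)
```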
Join with Join Function
A Join transformation can also call a user-defined join function to process joining tuples.
A join function receives one element of the first input DataSet and one element of the second input DataSet and returns exactly one element.
The following code performs a join of a DataSet of custom Java objects with a Tuple DataSet using key-selector functions and shows how to use a user-defined join function:
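A sketch; the Rating POJO, the weighting logic, and all class names are illustrative:

```java
// some ordinary POJO
public class Rating {
  public String name;
  public String category;
  public int points;
}

// join function that multiplies rating points with a category weight
public class PointWeighter
         implements JoinFunction<Rating, Tuple2<String, Double>, Tuple2<String, Double>> {

  @Override
  public Tuple2<String, Double> join(Rating rating, Tuple2<String, Double> weight) {
    return new Tuple2<String, Double>(rating.name, rating.points * weight.f1);
  }
}

DataSet<Rating> ratings = // [...]
DataSet<Tuple2<String, Double>> weights = // [...]
DataSet<Tuple2<String, Double>> weightedRatings =
    ratings.join(weights)
           // key-selector function for the first input
           .where(new KeySelector<Rating, String>() {
             public String getKey(Rating r) { return r.category; }
           })
           // key-selector function for the second input
           .equalTo(new KeySelector<Tuple2<String, Double>, String>() {
             public String getKey(Tuple2<String, Double> w) { return w.f0; }
           })
           .with(new PointWeighter());
```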
Join with Flat-Join Function
Analogous to Map and FlatMap, a FlatJoin behaves in the same
way as a Join, but instead of returning one element, it can
return (collect) zero, one, or more elements.
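A sketch, reusing the illustrative Rating POJO and the ratings/weights inputs from the previous example; the filtering threshold is an assumption:

```java
public class FilteringPointWeighter
         implements FlatJoinFunction<Rating, Tuple2<String, Double>, Tuple2<String, Double>> {

  @Override
  public void join(Rating rating, Tuple2<String, Double> weight,
                   Collector<Tuple2<String, Double>> out) {
    // emit a result only if the weight is significant
    if (weight.f1 > 0.1) {
      out.collect(new Tuple2<String, Double>(rating.name, rating.points * weight.f1));
    }
  }
}

DataSet<Tuple2<String, Double>> weightedRatings =
    ratings.join(weights)
           .where("category") // key expression on the first input
           .equalTo("f0")     // key expression on the second input
           .with(new FilteringPointWeighter());
```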
Note: Flat-join functions are not supported by the Python API.
Join with Projection (Java/Python Only)
A Join transformation can construct result tuples using a projection as shown here:
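A sketch with illustrative tuple types:

```java
DataSet<Tuple3<Integer, Byte, String>> input1 = // [...]
DataSet<Tuple2<Integer, Double>> input2 = // [...]
DataSet<Tuple4<Integer, String, Double, Byte>> result =
            input1.join(input2)
                  .where(0)    // key of the first input
                  .equalTo(0)  // key of the second input
                  // select and reorder fields of the matching tuples
                  .projectFirst(0, 2).projectSecond(1).projectFirst(1);
```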
projectFirst(int...) and projectSecond(int...) select the fields of the first and second joined input that should be assembled into an output Tuple. The order of indexes defines the order of fields in the output tuple.
The join projection also works for non-Tuple DataSets. In this case, projectFirst() or projectSecond() must be called without arguments to add a joined element to the output Tuple.
In the Python API, project_first(int...) and project_second(int...) serve the same purpose: they select the fields of the first and second joined input that should be assembled into an output Tuple, and the order of indexes defines the order of fields in the output tuple.
The join projection also works for non-Tuple DataSets. In this case, project_first() or project_second() must be called without arguments to add a joined element to the output Tuple.
Join with DataSet Size Hint
In order to guide the optimizer to pick the right execution strategy, you can hint the size of a DataSet to join as shown here:
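A sketch:

```java
DataSet<Tuple2<Integer, String>> input1 = // [...]
DataSet<Tuple2<Integer, String>> input2 = // [...]

DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, String>>> result1 =
            // hint that the second DataSet is very small
            input1.joinWithTiny(input2)
                  .where(0)
                  .equalTo(0);

DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, String>>> result2 =
            // hint that the second DataSet is very large
            input1.joinWithHuge(input2)
                  .where(0)
                  .equalTo(0);
```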
Join Algorithm Hints
The Flink runtime can execute joins in various ways. Each possible way outperforms the others under
different circumstances. The system tries to pick a reasonable way automatically, but allows you
to manually pick a strategy, in case you want to enforce a specific way of executing the join.
The following hints are available:
OPTIMIZER_CHOOSES: Equivalent to not giving a hint at all, leaves the choice to the system.
BROADCAST_HASH_FIRST: Broadcasts the first input and builds a hash table from it, which is
probed by the second input. A good strategy if the first input is very small.
BROADCAST_HASH_SECOND: Broadcasts the second input and builds a hash table from it, which is
probed by the first input. A good strategy if the second input is very small.
REPARTITION_HASH_FIRST: The system partitions (shuffles) each input (unless the input is already
partitioned) and builds a hash table from the first input. This strategy is good if the first
input is smaller than the second, but both inputs are still large.
Note: This is the default fallback strategy that the system uses if no size estimates can be made
and no pre-existing partitions and sort-orders can be re-used.
REPARTITION_HASH_SECOND: The system partitions (shuffles) each input (unless the input is already
partitioned) and builds a hash table from the second input. This strategy is good if the second
input is smaller than the first, but both inputs are still large.
REPARTITION_SORT_MERGE: The system partitions (shuffles) each input (unless the input is already
partitioned) and sorts each input (unless it is already sorted). The inputs are joined by
a streamed merge of the sorted inputs. This strategy is good if one or both of the inputs are
already sorted.
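A hint is passed as the second argument of join; a sketch (SomeType, AnotherType, and their key fields are illustrative):

```java
DataSet<SomeType> input1 = // [...]
DataSet<AnotherType> input2 = // [...]

DataSet<Tuple2<SomeType, AnotherType>> result =
      input1.join(input2, JoinHint.BROADCAST_HASH_FIRST)
            .where("id")
            .equalTo("key");
```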
OuterJoin
The OuterJoin transformation performs a left, right, or full outer join on two data sets. Outer joins are similar to regular (inner) joins and create all pairs of elements that are equal on their keys. In addition, records of the “outer” side (left, right, or both in case of full) are preserved if no matching key is found in the other side. Matching pairs of elements (or one element and a null value for the other input) are given to a JoinFunction to turn the pair of elements into a single element, or to a FlatJoinFunction to turn the pair of elements into arbitrarily many (including none) elements.
The elements of both DataSets are joined on one or more keys which can be specified using
a key expression
a key-selector function
one or more field position keys (Tuple DataSet only)
Case Class fields
OuterJoins are only supported for the Java and Scala DataSet API.
OuterJoin with Join Function
An OuterJoin transformation calls a user-defined join function to process joining tuples.
A join function receives one element of the first input DataSet and one element of the second input DataSet and returns exactly one element. Depending on the type of the outer join (left, right, full), one of the two input elements of the join function can be null.
The following code performs a left outer join of a Tuple DataSet with a DataSet of custom Java objects using key-selector functions and shows how to use a user-defined join function:
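A sketch; the Rating POJO is the illustrative one from the Join example above, and the movies input and PointAssigner logic are assumptions:

```java
// join function that assigns rating points to a movie
public class PointAssigner
         implements JoinFunction<Tuple2<String, String>, Rating, Tuple2<String, Integer>> {

  @Override
  public Tuple2<String, Integer> join(Tuple2<String, String> movie, Rating rating) {
    // for a left outer join, the second input may be null
    return new Tuple2<String, Integer>(movie.f0, rating == null ? -1 : rating.points);
  }
}

DataSet<Tuple2<String, String>> movies = // [...]
DataSet<Rating> ratings = // [...]
DataSet<Tuple2<String, Integer>> moviesWithPoints =
    movies.leftOuterJoin(ratings)
          // key-selector function for the first input
          .where(new KeySelector<Tuple2<String, String>, String>() {
            public String getKey(Tuple2<String, String> movie) { return movie.f1; }
          })
          // key-selector function for the second input
          .equalTo(new KeySelector<Rating, String>() {
            public String getKey(Rating r) { return r.name; }
          })
          .with(new PointAssigner());
```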
OuterJoin with Flat-Join Function
Analogous to Map and FlatMap, an OuterJoin with flat-join function behaves in the same
way as an OuterJoin with join function, but instead of returning one element, it can
return (collect) zero, one, or more elements.
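A sketch, reusing the illustrative types from the previous example; emitting only matched pairs is an assumption:

```java
public class MatchedPointAssigner
         implements FlatJoinFunction<Tuple2<String, String>, Rating, Tuple2<String, Integer>> {

  @Override
  public void join(Tuple2<String, String> movie, Rating rating,
                   Collector<Tuple2<String, Integer>> out) {
    // emit a result only for movies that have a matching rating
    if (rating != null) {
      out.collect(new Tuple2<String, Integer>(movie.f0, rating.points));
    }
  }
}
```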
Join Algorithm Hints
The Flink runtime can execute outer joins in various ways. Each possible way outperforms the others under
different circumstances. The system tries to pick a reasonable way automatically, but allows you
to manually pick a strategy, in case you want to enforce a specific way of executing the outer join.
The following hints are available.
OPTIMIZER_CHOOSES: Equivalent to not giving a hint at all, leaves the choice to the system.
BROADCAST_HASH_FIRST: Broadcasts the first input and builds a hash table from it, which is
probed by the second input. A good strategy if the first input is very small.
BROADCAST_HASH_SECOND: Broadcasts the second input and builds a hash table from it, which is
probed by the first input. A good strategy if the second input is very small.
REPARTITION_HASH_FIRST: The system partitions (shuffles) each input (unless the input is already
partitioned) and builds a hash table from the first input. This strategy is good if the first
input is smaller than the second, but both inputs are still large.
REPARTITION_HASH_SECOND: The system partitions (shuffles) each input (unless the input is already
partitioned) and builds a hash table from the second input. This strategy is good if the second
input is smaller than the first, but both inputs are still large.
REPARTITION_SORT_MERGE: The system partitions (shuffles) each input (unless the input is already
partitioned) and sorts each input (unless it is already sorted). The inputs are joined by
a streamed merge of the sorted inputs. This strategy is good if one or both of the inputs are
already sorted.
NOTE: Not all execution strategies are supported by every outer join type yet.
LeftOuterJoin supports:
OPTIMIZER_CHOOSES
BROADCAST_HASH_SECOND
REPARTITION_HASH_SECOND
REPARTITION_SORT_MERGE
RightOuterJoin supports:
OPTIMIZER_CHOOSES
BROADCAST_HASH_FIRST
REPARTITION_HASH_FIRST
REPARTITION_SORT_MERGE
FullOuterJoin supports:
OPTIMIZER_CHOOSES
REPARTITION_SORT_MERGE
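A sketch of passing a supported hint as the second argument (the types, key fields, and MyJoinFunction placeholder are illustrative):

```java
DataSet<SomeType> input1 = // [...]
DataSet<AnotherType> input2 = // [...]

DataSet<Tuple2<SomeType, AnotherType>> result =
      input1.leftOuterJoin(input2, JoinHint.REPARTITION_SORT_MERGE)
            .where("id")
            .equalTo("key")
            .with(new MyJoinFunction()); // placeholder: any matching JoinFunction
```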
Cross
The Cross transformation combines two DataSets into one DataSet. It builds all pairwise combinations of the elements of both input DataSets, i.e., it builds a Cartesian product.
The Cross transformation either calls a user-defined cross function on each pair of elements or outputs a Tuple2. Both modes are shown in the following.
Note: Cross is potentially a very compute-intensive operation which can challenge even large compute clusters!
Cross with User-Defined Function
A Cross transformation can call a user-defined cross function. A cross function receives one element of the first input and one element of the second input and returns exactly one result element.
The following code shows how to apply a Cross transformation on two DataSets using a cross function:
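A sketch that computes the pairwise Euclidean distance between two sets of coordinates (the Coord POJO and distance logic are illustrative):

```java
// coordinate POJO
public class Coord {
  public int id;
  public int x;
  public int y;
}

// CrossFunction that computes the Euclidean distance between two Coord objects
public class EuclideanDistComputer
         implements CrossFunction<Coord, Coord, Tuple3<Integer, Integer, Double>> {

  @Override
  public Tuple3<Integer, Integer, Double> cross(Coord c1, Coord c2) {
    double dist = Math.sqrt(Math.pow(c1.x - c2.x, 2) + Math.pow(c1.y - c2.y, 2));
    return new Tuple3<Integer, Integer, Double>(c1.id, c2.id, dist);
  }
}

DataSet<Coord> coords1 = // [...]
DataSet<Coord> coords2 = // [...]
DataSet<Tuple3<Integer, Integer, Double>> distances =
            coords1.cross(coords2)
                   .with(new EuclideanDistComputer()); // apply CrossFunction to each pair
```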
Cross with Projection
A Cross transformation can also construct result tuples using a projection as shown here:
The field selection in a Cross projection works the same way as in the projection of Join results.
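A sketch with illustrative tuple types:

```java
DataSet<Tuple3<Integer, Byte, String>> input1 = // [...]
DataSet<Tuple2<Integer, Double>> input2 = // [...]
DataSet<Tuple4<Integer, Byte, Integer, Double>> result =
            input1.cross(input2)
                  // select and reorder fields of the crossed tuples
                  .projectSecond(0).projectFirst(1, 0).projectSecond(1);
```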
Cross with DataSet Size Hint
In order to guide the optimizer to pick the right execution strategy, you can hint the size of a DataSet to cross as shown here:
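A sketch:

```java
DataSet<Tuple2<Integer, String>> input1 = // [...]
DataSet<Tuple2<Integer, String>> input2 = // [...]

DataSet<Tuple3<Integer, String, String>> result1 =
            // hint that the second DataSet is very small
            input1.crossWithTiny(input2)
                  .projectFirst(0, 1).projectSecond(1);

DataSet<Tuple3<Integer, String, String>> result2 =
            // hint that the second DataSet is very large
            input1.crossWithHuge(input2)
                  .projectFirst(0, 1).projectSecond(1);
```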
CoGroup
The CoGroup transformation jointly processes groups of two DataSets. Both DataSets are grouped on a defined key and groups of both DataSets that share the same key are handed together to a user-defined co-group function. If for a specific key only one DataSet has a group, the co-group function is called with this group and an empty group.
A co-group function can separately iterate over the elements of both groups and return an arbitrary number of result elements.
Similar to Reduce, GroupReduce, and Join, keys can be defined using the different key-selection methods.
CoGroup on DataSets
The following example shows how to group by field position keys (Tuple DataSets only). You can do the same with POJO types and key expressions.
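A sketch; the co-group logic, which multiplies each Double of one group with each distinct Integer of the matching group, is illustrative:

```java
// CoGroupFunction that combines the groups of two inputs that share the same String key
public class MyCoGrouper
         implements CoGroupFunction<Tuple2<String, Integer>, Tuple2<String, Double>, Double> {

  @Override
  public void coGroup(Iterable<Tuple2<String, Integer>> iVals,
                      Iterable<Tuple2<String, Double>> dVals,
                      Collector<Double> out) {

    Set<Integer> ints = new HashSet<Integer>();

    // collect all distinct Integer values of the group
    for (Tuple2<String, Integer> val : iVals) {
      ints.add(val.f1);
    }

    // multiply each Double value with each distinct Integer value of the group
    for (Tuple2<String, Double> val : dVals) {
      for (Integer i : ints) {
        out.collect(val.f1 * i);
      }
    }
  }
}

DataSet<Tuple2<String, Integer>> iVals = // [...]
DataSet<Tuple2<String, Double>> dVals = // [...]
DataSet<Double> output = iVals.coGroup(dVals)
                         .where(0)    // group first DataSet on first tuple field
                         .equalTo(0)  // group second DataSet on first tuple field
                         .with(new MyCoGrouper()); // apply CoGroup function per pair of groups
```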
Union
Produces the union of two DataSets, which have to be of the same type. A union of more than two DataSets can be implemented with multiple union calls, as shown here:
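A sketch:

```java
DataSet<Tuple2<String, Integer>> vals1 = // [...]
DataSet<Tuple2<String, Integer>> vals2 = // [...]
DataSet<Tuple2<String, Integer>> vals3 = // [...]
DataSet<Tuple2<String, Integer>> unioned = vals1.union(vals2).union(vals3);
```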
Rebalance
Evenly rebalances the parallel partitions of a DataSet to eliminate data skew.
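A sketch; the identity map that follows the rebalance is just a placeholder for any subsequent transformation:

```java
DataSet<String> in = // [...]
// rebalance DataSet, then apply a Map transformation
DataSet<String> out = in.rebalance()
                        .map(new MapFunction<String, String>() {
                          public String map(String value) {
                            return value; // identity, illustrative
                          }
                        });
```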
Hash-Partition
Hash-partitions a DataSet on a given key.
Keys can be specified as position keys, expression keys, and key selector functions (see Reduce examples for how to specify keys).
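A sketch; counting the elements per partition after partitioning is illustrative:

```java
DataSet<Tuple2<String, Integer>> in = // [...]
// hash-partition DataSet by String value, then process each partition as a whole
DataSet<Long> out = in.partitionByHash(0)
                      .mapPartition(new MapPartitionFunction<Tuple2<String, Integer>, Long>() {
                        public void mapPartition(Iterable<Tuple2<String, Integer>> values,
                                                 Collector<Long> out) {
                          long c = 0;
                          for (Tuple2<String, Integer> v : values) {
                            c++;
                          }
                          out.collect(c); // count per partition, illustrative
                        }
                      });
```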
Range-Partition
Range-partitions a DataSet on a given key.
Keys can be specified as position keys, expression keys, and key selector functions (see Reduce examples for how to specify keys).
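A sketch, analogous to the hash-partition example above:

```java
DataSet<Tuple2<String, Integer>> in = // [...]
// range-partition DataSet by String value, then process each partition as a whole
DataSet<Long> out = in.partitionByRange(0)
                      .mapPartition(new MapPartitionFunction<Tuple2<String, Integer>, Long>() {
                        public void mapPartition(Iterable<Tuple2<String, Integer>> values,
                                                 Collector<Long> out) {
                          long c = 0;
                          for (Tuple2<String, Integer> v : values) {
                            c++;
                          }
                          out.collect(c); // count per partition, illustrative
                        }
                      });
```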
Sort Partition
Locally sorts all partitions of a DataSet on a specified field in a specified order.
Fields can be specified as field expressions or field positions (see Reduce examples for how to specify keys).
Partitions can be sorted on multiple fields by chaining sortPartition() calls.
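A sketch:

```java
DataSet<Tuple2<String, Integer>> in = // [...]
// locally sort partitions in ascending order on the second field
// and in descending order on the first field
DataSet<Tuple2<String, Integer>> out = in.sortPartition(1, Order.ASCENDING)
                                         .sortPartition(0, Order.DESCENDING);
```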
First-n
Returns the first n (arbitrary) elements of a DataSet. First-n can be applied on a regular DataSet, a grouped DataSet, or a grouped-sorted DataSet. Grouping keys can be specified as key-selector functions or field position keys (see Reduce examples for how to specify keys).
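A sketch of all three variants:

```java
DataSet<Tuple2<String, Integer>> in = // [...]
// return the first five (arbitrary) elements of the DataSet
DataSet<Tuple2<String, Integer>> out1 = in.first(5);

// return the first two (arbitrary) elements of each String group
DataSet<Tuple2<String, Integer>> out2 = in.groupBy(0)
                                          .first(2);

// return the first three elements of each String group ordered by the Integer field
DataSet<Tuple2<String, Integer>> out3 = in.groupBy(0)
                                          .sortGroup(1, Order.ASCENDING)
                                          .first(3);
```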