public class KeyedStream<T,K> extends DataStream<T>
| Constructor and Description |
| --- |
| KeyedStream(KeyedStream<T,K> javaStream) |
| Modifier and Type | Method and Description |
| --- | --- |
| QueryableStateStream<K,T> | asQueryableState(String queryableStateName) Publishes the keyed stream as a queryable ValueState instance. |
| <ACC> QueryableStateStream<K,ACC> | asQueryableState(String queryableStateName, FoldingStateDescriptor<T,ACC> stateDescriptor) Deprecated. Will be removed in a future version. |
| QueryableStateStream<K,T> | asQueryableState(String queryableStateName, ReducingStateDescriptor<T> stateDescriptor) Publishes the keyed stream as a queryable ReducingState instance. |
| QueryableStateStream<K,T> | asQueryableState(String queryableStateName, ValueStateDescriptor<T> stateDescriptor) Publishes the keyed stream as a queryable ValueState instance. |
| WindowedStream<T,K,GlobalWindow> | countWindow(long size) Windows this KeyedStream into tumbling count windows. |
| WindowedStream<T,K,GlobalWindow> | countWindow(long size, long slide) Windows this KeyedStream into sliding count windows. |
| <S> DataStream<T> | filterWithState(scala.Function2<T,scala.Option<S>,scala.Tuple2<Object,scala.Option<S>>> fun, TypeInformation<S> evidence$4) Creates a new DataStream that contains only the elements satisfying the given stateful filter predicate. |
| <R,S> DataStream<R> | flatMapWithState(scala.Function2<T,scala.Option<S>,scala.Tuple2<scala.collection.TraversableOnce<R>,scala.Option<S>>> fun, TypeInformation<R> evidence$7, TypeInformation<S> evidence$8) Creates a new DataStream by applying the given stateful function to every element and flattening the results. |
| <R> DataStream<R> | fold(R initialValue, FoldFunction<T,R> folder, TypeInformation<R> evidence$2) Deprecated. Will be removed in a future version. |
| <R> DataStream<R> | fold(R initialValue, scala.Function2<R,T,R> fun, TypeInformation<R> evidence$3) Deprecated. Will be removed in a future version. |
| TypeInformation<K> | getKeyType() Gets the type of the key by which this stream is keyed. |
| <R,S> DataStream<R> | mapWithState(scala.Function2<T,scala.Option<S>,scala.Tuple2<R,scala.Option<S>>> fun, TypeInformation<R> evidence$5, TypeInformation<S> evidence$6) Creates a new DataStream by applying the given stateful function to every element of this DataStream. |
| DataStream<T> | max(int position) Applies an aggregation that gives the current maximum of the data stream at the given position by the given key. |
| DataStream<T> | max(String field) Applies an aggregation that gives the current maximum of the data stream at the given field by the given key. |
| DataStream<T> | maxBy(int position) Applies an aggregation that gives the current maximum element of the data stream by the given position by the given key. |
| DataStream<T> | maxBy(String field) Applies an aggregation that gives the current maximum element of the data stream by the given field by the given key. |
| DataStream<T> | min(int position) Applies an aggregation that gives the current minimum of the data stream at the given position by the given key. |
| DataStream<T> | min(String field) Applies an aggregation that gives the current minimum of the data stream at the given field by the given key. |
| DataStream<T> | minBy(int position) Applies an aggregation that gives the current minimum element of the data stream by the given position by the given key. |
| DataStream<T> | minBy(String field) Applies an aggregation that gives the current minimum element of the data stream by the given field by the given key. |
| <R> DataStream<R> | process(ProcessFunction<T,R> processFunction, TypeInformation<R> evidence$1) Applies the given ProcessFunction on the input stream, thereby creating a transformed output stream. |
| DataStream<T> | reduce(scala.Function2<T,T,T> fun) Creates a new DataStream by reducing the elements of this DataStream using an associative reduce function. |
| DataStream<T> | reduce(ReduceFunction<T> reducer) Creates a new DataStream by reducing the elements of this DataStream using an associative reduce function. |
| DataStream<T> | sum(int position) Applies an aggregation that sums the data stream at the given position by the given key. |
| DataStream<T> | sum(String field) Applies an aggregation that sums the data stream at the given field by the given key. |
| WindowedStream<T,K,TimeWindow> | timeWindow(Time size) Windows this KeyedStream into tumbling time windows. |
| WindowedStream<T,K,TimeWindow> | timeWindow(Time size, Time slide) Windows this KeyedStream into sliding time windows. |
| <W extends Window> WindowedStream<T,K,W> | window(WindowAssigner<? super T,W> assigner) Windows this data stream to a WindowedStream, which evaluates windows over a key grouped stream. |
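The two countWindow variants in the table differ only in how windows advance: tumbling windows are consecutive and non-overlapping, sliding windows advance by a separate slide interval. As a rough pure-Scala sketch of the per-key grouping semantics (not Flink code; Flink evaluates count windows incrementally per key and only fires a window once it holds `size` elements):

```scala
// Sketch of count-window grouping over a single key's elements.
// tumbling: consecutive, non-overlapping groups of `size` elements.
// sliding: a group of `size` elements emitted every `slide` elements.
def tumblingCountWindows[T](events: Seq[T], size: Int): Seq[Seq[T]] =
  events.grouped(size).toSeq // note: unlike Flink, this includes a trailing partial group

def slidingCountWindows[T](events: Seq[T], size: Int, slide: Int): Seq[Seq[T]] =
  events.sliding(size, slide).toSeq
```

For example, `tumblingCountWindows(Seq(1, 2, 3, 4, 5), 2)` produces the groups (1,2), (3,4), (5), while `slidingCountWindows(Seq(1, 2, 3, 4, 5), 3, 2)` produces (1,2,3) and (3,4,5).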
Methods inherited from class DataStream:
addSink, addSink, assignAscendingTimestamps, assignTimestamps, assignTimestampsAndWatermarks, assignTimestampsAndWatermarks, broadcast, clean, coGroup, connect, countWindowAll, countWindowAll, dataType, disableChaining, executionConfig, executionEnvironment, filter, filter, flatMap, flatMap, flatMap, forward, getExecutionConfig, getExecutionEnvironment, getId, getName, getParallelism, getSideOutput, getType, global, iterate, iterate, javaStream, join, keyBy, keyBy, keyBy, map, map, minResources, name, name, parallelism, partitionCustom, partitionCustom, partitionCustom, preferredResources, print, printToErr, rebalance, rescale, setBufferTimeout, setMaxParallelism, setParallelism, setUidHash, shuffle, slotSharingGroup, split, split, startNewChain, timeWindowAll, timeWindowAll, transform, uid, union, windowAll, writeAsCsv, writeAsCsv, writeAsCsv, writeAsText, writeAsText, writeToSocket, writeUsingOutputFormat
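The stateful shortcuts summarized above (mapWithState, flatMapWithState, filterWithState) all thread an Option-wrapped state value through the user function. A minimal pure-Scala sketch of that contract for a single key (not Flink code; Flink keeps one such state value per key and requires the state object to be serializable):

```scala
// Simulates the mapWithState contract for one key: the user function receives
// (element, Option[currentState]) and returns (output, Option[newState]);
// returning None as the new state clears the state.
def mapWithStateSim[T, R, S](events: Seq[T])(
    fun: (T, Option[S]) => (R, Option[S])): Seq[R] = {
  var state: Option[S] = None
  events.map { e =>
    val (out, newState) = fun(e, state)
    state = newState
    out
  }
}

// Usage sketch: a running sum, emitting the updated total for each element.
val sums = mapWithStateSim(Seq(1, 2, 3)) { (x: Int, st: Option[Int]) =>
  val total = st.getOrElse(0) + x
  (total, Some(total))
}
// sums == Seq(1, 3, 6)
```

filterWithState and flatMapWithState follow the same shape, differing only in whether the first tuple component is a boolean keep/drop decision or a collection of outputs.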
public KeyedStream(KeyedStream<T,K> javaStream)
public TypeInformation<K> getKeyType()

Gets the type of the key by which this stream is keyed.
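Several methods below (reduce, sum, min, max and their variants) keep an independent aggregate per key and emit the updated aggregate for every input element. A rough pure-Scala sketch of that per-key semantics (not Flink code; the key-extractor argument stands in for the keyBy that produced this stream):

```scala
import scala.collection.mutable

// Emits, for each input element, the updated running aggregate of that
// element's key. One aggregate is kept per key, as on a KeyedStream.
def keyedReduceSim[K, T](events: Seq[T])(key: T => K)(reduce: (T, T) => T): Seq[T] = {
  val state = mutable.Map.empty[K, T] // one aggregate per key
  events.map { e =>
    val agg = state.get(key(e)).map(prev => reduce(prev, e)).getOrElse(e)
    state(key(e)) = agg
    agg
  }
}

// Usage sketch: per-key running sum over (key, value) pairs.
val out = keyedReduceSim(Seq(("a", 1), ("b", 2), ("a", 3)))(_._1) {
  (x, y) => (x._1, x._2 + y._2)
}
// out == Seq(("a", 1), ("b", 2), ("a", 4))
```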
public <R> DataStream<R> process(ProcessFunction<T,R> processFunction, TypeInformation<R> evidence$1)
Applies the given ProcessFunction on the input stream, thereby creating a transformed output stream.

The function will be called for every element in the input stream and can produce zero or more output elements. Contrary to the DataStream#flatMap(FlatMapFunction) function, this function can also query the time and set timers. When reacting to the firing of set timers the function can directly emit elements and/or register yet more timers.

Overrides:
process in class DataStream<T>

Parameters:
processFunction - The ProcessFunction that is called for each element in the stream.
evidence$1 - (undocumented)

public WindowedStream<T,K,TimeWindow> timeWindow(Time size)
Windows this KeyedStream into tumbling time windows.

This is a shortcut for either .window(TumblingEventTimeWindows.of(size)) or .window(TumblingProcessingTimeWindows.of(size)) depending on the time characteristic set using StreamExecutionEnvironment.setStreamTimeCharacteristic().

Parameters:
size - The size of the window.

public WindowedStream<T,K,GlobalWindow> countWindow(long size, long slide)
Windows this KeyedStream into sliding count windows.

Parameters:
size - The size of the windows in number of elements.
slide - The slide interval in number of elements.

public WindowedStream<T,K,GlobalWindow> countWindow(long size)
Windows this KeyedStream into tumbling count windows.

Parameters:
size - The size of the windows in number of elements.

public WindowedStream<T,K,TimeWindow> timeWindow(Time size, Time slide)
Windows this KeyedStream into sliding time windows.

This is a shortcut for either .window(SlidingEventTimeWindows.of(size)) or .window(SlidingProcessingTimeWindows.of(size)) depending on the time characteristic set using StreamExecutionEnvironment.setStreamTimeCharacteristic().

Parameters:
size - The size of the window.
slide - (undocumented)

public <W extends Window> WindowedStream<T,K,W> window(WindowAssigner<? super T,W> assigner)
Windows this data stream to a WindowedStream, which evaluates windows over a key grouped stream. Elements are put into windows by a WindowAssigner. The grouping of elements is done both by key and by window.

A Trigger can be defined to specify when windows are evaluated. However, WindowAssigners have a default Trigger that is used if a Trigger is not specified.

Parameters:
assigner - The WindowAssigner that assigns elements to windows.

public DataStream<T> reduce(ReduceFunction<T> reducer)
Creates a new DataStream by reducing the elements of this DataStream using an associative reduce function. An independent aggregate is kept per key.

Parameters:
reducer - (undocumented)

public DataStream<T> reduce(scala.Function2<T,T,T> fun)
Creates a new DataStream by reducing the elements of this DataStream using an associative reduce function. An independent aggregate is kept per key.

Parameters:
fun - (undocumented)

public <R> DataStream<R> fold(R initialValue, FoldFunction<T,R> folder, TypeInformation<R> evidence$2)
Deprecated. Will be removed in a future version.

Creates a new DataStream by folding the elements of this DataStream using an associative fold function and an initial value. An independent aggregate is kept per key.

Parameters:
initialValue - (undocumented)
folder - (undocumented)
evidence$2 - (undocumented)

public <R> DataStream<R> fold(R initialValue, scala.Function2<R,T,R> fun, TypeInformation<R> evidence$3)
Deprecated. Will be removed in a future version.

Creates a new DataStream by folding the elements of this DataStream using an associative fold function and an initial value. An independent aggregate is kept per key.

Parameters:
initialValue - (undocumented)
fun - (undocumented)
evidence$3 - (undocumented)

public DataStream<T> max(int position)
Applies an aggregation that gives the current maximum of the data stream at the given position by the given key.

Parameters:
position - The field position in the data points to maximize. This is applicable to Tuple types, Scala case classes, and primitive types (which is considered as having one field).

public DataStream<T> max(String field)

Applies an aggregation that gives the current maximum of the data stream at the given field by the given key.

Parameters:
field - In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).

public DataStream<T> min(int position)

Applies an aggregation that gives the current minimum of the data stream at the given position by the given key.

Parameters:
position - The field position in the data points to minimize. This is applicable to Tuple types, Scala case classes, and primitive types (which is considered as having one field).

public DataStream<T> min(String field)

Applies an aggregation that gives the current minimum of the data stream at the given field by the given key.

Parameters:
field - In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).

public DataStream<T> sum(int position)

Applies an aggregation that sums the data stream at the given position by the given key.

Parameters:
position - The field position in the data points to sum. This is applicable to Tuple types, Scala case classes, and primitive types (which is considered as having one field).

public DataStream<T> sum(String field)

Applies an aggregation that sums the data stream at the given field by the given key.

Parameters:
field - In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).

public DataStream<T> minBy(int position)

Applies an aggregation that gives the current minimum element of the data stream by the given position by the given key.

Parameters:
position - The field position in the data points to minimize. This is applicable to Tuple types, Scala case classes, and primitive types (which is considered as having one field).

public DataStream<T> minBy(String field)

Applies an aggregation that gives the current minimum element of the data stream by the given field by the given key.

Parameters:
field - In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).

public DataStream<T> maxBy(int position)

Applies an aggregation that gives the current maximum element of the data stream by the given position by the given key.

Parameters:
position - The field position in the data points to maximize. This is applicable to Tuple types, Scala case classes, and primitive types (which is considered as having one field).

public DataStream<T> maxBy(String field)

Applies an aggregation that gives the current maximum element of the data stream by the given field by the given key.

Parameters:
field - In case of a POJO, Scala case class, or Tuple type, the name of the (public) field on which to perform the aggregation. Additionally, a dot can be used to drill down into nested objects, as in "field1.fieldxy". Furthermore "*" can be specified in case of a basic type (which is considered as having only one field).

public <S> DataStream<T> filterWithState(scala.Function2<T,scala.Option<S>,scala.Tuple2<Object,scala.Option<S>>> fun, TypeInformation<S> evidence$4)
Creates a new DataStream that contains only the elements satisfying the given stateful filter predicate. Note that the user state object needs to be serializable.

Parameters:
fun - (undocumented)
evidence$4 - (undocumented)

public <R,S> DataStream<R> mapWithState(scala.Function2<T,scala.Option<S>,scala.Tuple2<R,scala.Option<S>>> fun, TypeInformation<R> evidence$5, TypeInformation<S> evidence$6)

Creates a new DataStream by applying the given stateful function to every element of this DataStream. Note that the user state object needs to be serializable.

Parameters:
fun - (undocumented)
evidence$5 - (undocumented)
evidence$6 - (undocumented)

public <R,S> DataStream<R> flatMapWithState(scala.Function2<T,scala.Option<S>,scala.Tuple2<scala.collection.TraversableOnce<R>,scala.Option<S>>> fun, TypeInformation<R> evidence$7, TypeInformation<S> evidence$8)

Creates a new DataStream by applying the given stateful function to every element and flattening the results. Note that the user state object needs to be serializable.

Parameters:
fun - (undocumented)
evidence$7 - (undocumented)
evidence$8 - (undocumented)

public QueryableStateStream<K,T> asQueryableState(String queryableStateName)
Publishes the keyed stream as a queryable ValueState instance.

Parameters:
queryableStateName - Name under which to publish the queryable state instance

public QueryableStateStream<K,T> asQueryableState(String queryableStateName, ValueStateDescriptor<T> stateDescriptor)

Publishes the keyed stream as a queryable ValueState instance.

Parameters:
queryableStateName - Name under which to publish the queryable state instance
stateDescriptor - State descriptor to create state instance from

public <ACC> QueryableStateStream<K,ACC> asQueryableState(String queryableStateName, FoldingStateDescriptor<T,ACC> stateDescriptor)

Deprecated. Will be removed in a future version.

Parameters:
queryableStateName - Name under which to publish the queryable state instance
stateDescriptor - State descriptor to create state instance from

public QueryableStateStream<K,T> asQueryableState(String queryableStateName, ReducingStateDescriptor<T> stateDescriptor)

Publishes the keyed stream as a queryable ReducingState instance.

Parameters:
queryableStateName - Name under which to publish the queryable state instance
stateDescriptor - State descriptor to create state instance from

Copyright © 2014–2018 The Apache Software Foundation. All rights reserved.