Modifier and Type | Method and Description |
---|---|
static <T> PatternStream<T> | CEP.pattern(DataStream<T> input, Pattern<T,?> pattern) Creates a PatternStream from an input data stream and a pattern. |
static <T> PatternStream<T> | CEP.pattern(DataStream<T> input, Pattern<T,?> pattern, EventComparator<T> comparator) Creates a PatternStream from an input data stream and a pattern. |
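For orientation, a minimal sketch of how CEP.pattern couples a stream with a pattern. The event values and conditions here are placeholders, not part of the API above; running this requires the flink-cep and flink-streaming dependencies on the classpath.

```java
import org.apache.flink.cep.CEP;
import org.apache.flink.cep.PatternStream;
import org.apache.flink.cep.pattern.Pattern;
import org.apache.flink.cep.pattern.conditions.SimpleCondition;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CepSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> input = env.fromElements("start", "middle", "end");

        // A pattern that matches a "start" event eventually followed by an "end" event.
        Pattern<String, ?> pattern = Pattern.<String>begin("first")
                .where(new SimpleCondition<String>() {
                    @Override
                    public boolean filter(String value) { return value.equals("start"); }
                })
                .followedBy("second")
                .where(new SimpleCondition<String>() {
                    @Override
                    public boolean filter(String value) { return value.equals("end"); }
                });

        // CEP.pattern(...) is the entry point listed above: stream + pattern -> PatternStream.
        PatternStream<String> patternStream = CEP.pattern(input, pattern);
    }
}
```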
Modifier and Type | Method and Description |
---|---|
DataStream<Row> | JdbcTableSource.getDataStream(StreamExecutionEnvironment execEnv) |

Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | JdbcUpsertTableSink.consumeDataStream(DataStream<Tuple2<Boolean,Row>> dataStream) |

Modifier and Type | Method and Description |
---|---|
protected DataStream<RowData> | HiveTableSource.getDataStream(StreamExecutionEnvironment execEnv) |

Modifier and Type | Method and Description |
---|---|
DataStream<Row> | StreamSQLTestProgram.GeneratorTableSource.getDataStream(StreamExecutionEnvironment execEnv) |
Modifier and Type | Class and Description |
---|---|
class | DataStreamSource<T> The DataStreamSource represents the starting point of a DataStream. |
class | IterativeStream<T> The iterative data stream represents the start of an iteration in a DataStream. |
class | KeyedStream<T,KEY> A KeyedStream represents a DataStream on which operator state is partitioned by key using a provided KeySelector. |
class | SingleOutputStreamOperator<T> SingleOutputStreamOperator represents a user-defined transformation applied on a DataStream with one predefined output type. |
Modifier and Type | Field and Description |
---|---|
protected DataStream<IN1> | ConnectedStreams.inputStream1 |
protected DataStream<IN2> | ConnectedStreams.inputStream2 |
Modifier and Type | Method and Description |
---|---|
<T> DataStream<T> | CoGroupedStreams.WithWindow.apply(CoGroupFunction<T1,T2,T> function) Completes the co-group operation with the user function that is executed for windowed groups. |
<T> DataStream<T> | CoGroupedStreams.WithWindow.apply(CoGroupFunction<T1,T2,T> function, TypeInformation<T> resultType) Completes the co-group operation with the user function that is executed for windowed groups. |
<T> DataStream<T> | JoinedStreams.WithWindow.apply(FlatJoinFunction<T1,T2,T> function) Completes the join operation with the user function that is executed for each combination of elements with the same key in a window. |
<T> DataStream<T> | JoinedStreams.WithWindow.apply(FlatJoinFunction<T1,T2,T> function, TypeInformation<T> resultType) Completes the join operation with the user function that is executed for each combination of elements with the same key in a window. |
<T> DataStream<T> | JoinedStreams.WithWindow.apply(JoinFunction<T1,T2,T> function) Completes the join operation with the user function that is executed for each combination of elements with the same key in a window. |
<T> DataStream<T> | JoinedStreams.WithWindow.apply(JoinFunction<T1,T2,T> function, TypeInformation<T> resultType) Completes the join operation with the user function that is executed for each combination of elements with the same key in a window. |
DataStream<T> | DataStream.broadcast() Sets the partitioning of the DataStream so that the output elements are broadcast to every parallel instance of the next operation. |
DataStream<F> | IterativeStream.ConnectedIterativeStreams.closeWith(DataStream<F> feedbackStream) Closes the iteration. |
DataStream<T> | IterativeStream.closeWith(DataStream<T> feedbackStream) Closes the iteration. |
DataStream<T> | DataStream.forward() Sets the partitioning of the DataStream so that the output elements are forwarded to the local subtask of the next operation. |
DataStream<IN1> | BroadcastConnectedStream.getFirstInput() Returns the non-broadcast DataStream. |
DataStream<IN1> | ConnectedStreams.getFirstInput() Returns the first DataStream. |
DataStream<IN2> | ConnectedStreams.getSecondInput() Returns the second DataStream. |
<X> DataStream<X> | SingleOutputStreamOperator.getSideOutput(OutputTag<X> sideOutputTag) Gets the DataStream that contains the elements that are emitted from an operation into the side output with the given OutputTag. |
DataStream<T> | DataStream.global() Sets the partitioning of the DataStream so that the output values all go to the first instance of the next processing operator. |
<K> DataStream<T> | DataStream.partitionCustom(Partitioner<K> partitioner, int field) Deprecated. |
<K> DataStream<T> | DataStream.partitionCustom(Partitioner<K> partitioner, KeySelector<T,K> keySelector) Partitions a DataStream on the key returned by the selector, using a custom partitioner. |
<K> DataStream<T> | DataStream.partitionCustom(Partitioner<K> partitioner, String field) Deprecated. |
DataStream<T> | DataStream.rebalance() Sets the partitioning of the DataStream so that the output elements are distributed evenly to instances of the next operation in a round-robin fashion. |
DataStream<T> | DataStream.rescale() Sets the partitioning of the DataStream so that the output elements are distributed evenly to a subset of instances of the next operation in a round-robin fashion. |
protected DataStream<T> | DataStream.setConnectionType(StreamPartitioner<T> partitioner) Internal function for setting the partitioner for the DataStream. |
protected DataStream<T> | KeyedStream.setConnectionType(StreamPartitioner<T> partitioner) |
DataStream<T> | DataStream.shuffle() Sets the partitioning of the DataStream so that the output elements are shuffled uniformly at random to the next operation. |
DataStream<T> | DataStream.union(DataStream<T>... streams) Creates a new DataStream by merging DataStream outputs of the same type with each other. |
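The partitioning setters above (broadcast, forward, global, rebalance, rescale, shuffle) and union all return a DataStream, so they compose fluently. A minimal sketch; the element values are placeholders:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class PartitioningSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Integer> a = env.fromElements(1, 2, 3);
        DataStream<Integer> b = env.fromElements(4, 5, 6);

        // union: merge DataStream outputs of the same type into one stream.
        DataStream<Integer> merged = a.union(b);

        // rebalance: distribute elements round-robin across all downstream subtasks.
        merged.rebalance().print();

        // broadcast: every parallel instance of the next operation sees every element.
        merged.broadcast().print();

        env.execute("partitioning-sketch");
    }
}
```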
Modifier and Type | Method and Description |
---|---|
DataStream<F> | IterativeStream.ConnectedIterativeStreams.closeWith(DataStream<F> feedbackStream) Closes the iteration. |
DataStream<T> | IterativeStream.closeWith(DataStream<T> feedbackStream) Closes the iteration. |
<T2> CoGroupedStreams<T,T2> | DataStream.coGroup(DataStream<T2> otherStream) Creates a co-group operation. |
static <OUT> Iterator<OUT> | DataStreamUtils.collect(DataStream<OUT> stream) Deprecated. Please use executeAndCollect(). |
static <OUT> Iterator<OUT> | DataStreamUtils.collect(DataStream<OUT> stream, String executionJobName) Deprecated. Please use executeAndCollect(). |
static <E> List<E> | DataStreamUtils.collectBoundedStream(DataStream<E> stream, String jobName) Deprecated. Please use executeAndCollect(). |
static <E> List<E> | DataStreamUtils.collectUnboundedStream(DataStream<E> stream, int numElements, String jobName) Deprecated. Please use executeAndCollect(). |
static <OUT> ClientAndIterator<OUT> | DataStreamUtils.collectWithClient(DataStream<OUT> stream, String jobExecutionName) Deprecated. Please use executeAndCollect(). |
<R> ConnectedStreams<T,R> | DataStream.connect(DataStream<R> dataStream) Creates a new ConnectedStreams by connecting DataStream outputs of (possibly) different types with each other. |
<T2> JoinedStreams<T,T2> | DataStream.join(DataStream<T2> otherStream) Creates a join operation. |
static <IN,OUT> SingleOutputStreamOperator<OUT> | AsyncDataStream.orderedWait(DataStream<IN> in, AsyncFunction<IN,OUT> func, long timeout, TimeUnit timeUnit) Adds an AsyncWaitOperator. |
static <IN,OUT> SingleOutputStreamOperator<OUT> | AsyncDataStream.orderedWait(DataStream<IN> in, AsyncFunction<IN,OUT> func, long timeout, TimeUnit timeUnit, int capacity) Adds an AsyncWaitOperator. |
static <T,K> KeyedStream<T,K> | DataStreamUtils.reinterpretAsKeyedStream(DataStream<T> stream, KeySelector<T,K> keySelector) |
static <T,K> KeyedStream<T,K> | DataStreamUtils.reinterpretAsKeyedStream(DataStream<T> stream, KeySelector<T,K> keySelector, TypeInformation<K> typeInfo) |
DataStream<T> | DataStream.union(DataStream<T>... streams) Creates a new DataStream by merging DataStream outputs of the same type with each other. |
static <IN,OUT> SingleOutputStreamOperator<OUT> | AsyncDataStream.unorderedWait(DataStream<IN> in, AsyncFunction<IN,OUT> func, long timeout, TimeUnit timeUnit) Adds an AsyncWaitOperator. |
static <IN,OUT> SingleOutputStreamOperator<OUT> | AsyncDataStream.unorderedWait(DataStream<IN> in, AsyncFunction<IN,OUT> func, long timeout, TimeUnit timeUnit, int capacity) Adds an AsyncWaitOperator. |
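orderedWait and unorderedWait differ only in whether result order follows input order; both wrap an AsyncFunction and bound in-flight requests via the capacity argument. A sketch with a hypothetical lookup, where the CompletableFuture stands in for a real non-blocking client call:

```java
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.async.AsyncFunction;
import org.apache.flink.streaming.api.functions.async.ResultFuture;

public class AsyncSketch {
    // Hypothetical enrichment: completes each request asynchronously.
    static class Lookup implements AsyncFunction<Integer, String> {
        @Override
        public void asyncInvoke(Integer key, ResultFuture<String> resultFuture) {
            CompletableFuture.supplyAsync(() -> "value-" + key)
                    .thenAccept(v -> resultFuture.complete(Collections.singleton(v)));
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Integer> keys = env.fromElements(1, 2, 3);

        // orderedWait preserves input order; each request times out after 1 s,
        // and at most 100 requests may be in flight at once.
        AsyncDataStream
                .orderedWait(keys, new Lookup(), 1000, TimeUnit.MILLISECONDS, 100)
                .print();

        env.execute("async-sketch");
    }
}
```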
Constructor and Description |
---|
AllWindowedStream(DataStream<T> input, WindowAssigner<? super T,W> windowAssigner) |
BroadcastConnectedStream(StreamExecutionEnvironment env, DataStream<IN1> input1, BroadcastStream<IN2> input2, List<MapStateDescriptor<?,?>> broadcastStateDescriptors) |
BroadcastStream(StreamExecutionEnvironment env, DataStream<T> input, MapStateDescriptor<?,?>... broadcastStateDescriptors) |
CoGroupedStreams(DataStream<T1> input1, DataStream<T2> input2) Creates new CoGrouped data streams, which are the first step towards building a streaming co-group. |
ConnectedIterativeStreams(DataStream<I> input, TypeInformation<F> feedbackType, long waitTime) |
ConnectedStreams(StreamExecutionEnvironment env, DataStream<IN1> input1, DataStream<IN2> input2) |
DataStreamSink(DataStream<T> inputStream, Sink<T,?,?,?> sink) |
DataStreamSink(DataStream<T> inputStream, StreamSink<T> operator) |
IterativeStream(DataStream<T> dataStream, long maxWaitTime) |
JoinedStreams(DataStream<T1> input1, DataStream<T2> input2) Creates new JoinedStreams data streams, which are the first step towards building a streaming join. |
KeyedStream(DataStream<T> dataStream, KeySelector<T,KEY> keySelector) Creates a new KeyedStream using the given KeySelector to partition operator state by key. |
KeyedStream(DataStream<T> dataStream, KeySelector<T,KEY> keySelector, TypeInformation<KEY> keyType) Creates a new KeyedStream using the given KeySelector to partition operator state by key. |
StreamProjection(DataStream<IN> dataStream, int[] fieldIndexes) |
WithWindow(DataStream<T1> input1, DataStream<T2> input2, KeySelector<T1,KEY> keySelector1, KeySelector<T2,KEY> keySelector2, TypeInformation<KEY> keyType, WindowAssigner<? super CoGroupedStreams.TaggedUnion<T1,T2>,W> windowAssigner, Trigger<? super CoGroupedStreams.TaggedUnion<T1,T2>,? super W> trigger, Evictor<? super CoGroupedStreams.TaggedUnion<T1,T2>,? super W> evictor, Time allowedLateness) |
Modifier and Type | Method and Description |
---|---|
DataStream<String> | StreamExecutionEnvironment.readFileStream(String filePath, long intervalMillis, FileMonitoringFunction.WatchType watchType) Deprecated. |
Constructor and Description |
---|
CollectStreamSink(DataStream<T> inputStream, CollectSinkOperatorFactory<T> factory) |
Modifier and Type | Field and Description |
---|---|
protected DataStream<IN> | CassandraSink.CassandraSinkBuilder.input |

Modifier and Type | Method and Description |
---|---|
static <IN> CassandraSink.CassandraSinkBuilder<IN> | CassandraSink.addSink(DataStream<IN> input) Writes a DataStream into a Cassandra database. |
DataStreamSink<?> | CassandraAppendTableSink.consumeDataStream(DataStream<Row> dataStream) |

Constructor and Description |
---|
CassandraPojoSinkBuilder(DataStream<IN> input, TypeInformation<IN> typeInfo, TypeSerializer<IN> serializer) |
CassandraRowSinkBuilder(DataStream<Row> input, TypeInformation<Row> typeInfo, TypeSerializer<Row> serializer) |
CassandraScalaProductSinkBuilder(DataStream<IN> input, TypeInformation<IN> typeInfo, TypeSerializer<IN> serializer) |
CassandraSinkBuilder(DataStream<IN> input, TypeInformation<IN> typeInfo, TypeSerializer<IN> serializer) |
CassandraTupleSinkBuilder(DataStream<IN> input, TypeInformation<IN> typeInfo, TypeSerializer<IN> serializer) |
Modifier and Type | Method and Description |
---|---|
static <T> KeyedStream<T,Tuple> | FlinkKafkaShuffle.persistentKeyBy(DataStream<T> dataStream, String topic, int producerParallelism, int numberOfPartitions, Properties properties, int... fields) Uses Kafka as a message bus to persist the keyBy shuffle. |
static <T,K> KeyedStream<T,K> | FlinkKafkaShuffle.persistentKeyBy(DataStream<T> dataStream, String topic, int producerParallelism, int numberOfPartitions, Properties properties, KeySelector<T,K> keySelector) Uses Kafka as a message bus to persist the keyBy shuffle. |
static <T> void | FlinkKafkaShuffle.writeKeyBy(DataStream<T> dataStream, String topic, Properties kafkaProperties, int... fields) |
static <T,K> void | FlinkKafkaShuffle.writeKeyBy(DataStream<T> dataStream, String topic, Properties kafkaProperties, KeySelector<T,K> keySelector) |
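A sketch of persistentKeyBy, which materializes the keyBy shuffle in a Kafka topic so the producer and consumer sides can be deployed and rescaled separately. FlinkKafkaShuffle is an experimental API; the broker address, topic name, and string-keyed records below are illustrative assumptions, not values from the listing above:

```java
import java.util.Properties;

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle;

public class KafkaShuffleSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> events = env.fromElements("a:1", "b:2", "a:3");

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumption: local broker

        KeyedStream<String, String> keyed = FlinkKafkaShuffle.persistentKeyBy(
                events,
                "shuffle-topic",  // assumption: Kafka topic backing the shuffle
                4,                // producer parallelism
                8,                // number of Kafka partitions
                props,
                (KeySelector<String, String>) s -> s.split(":")[0]);

        keyed.print();
        env.execute("kafka-shuffle-sketch");
    }
}
```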
Modifier and Type | Method and Description |
---|---|
static DataStream<Tuple2<String,Integer>> | WindowJoinSampleData.GradeSource.getSource(StreamExecutionEnvironment env, long rate) |
static DataStream<Tuple2<String,Integer>> | WindowJoinSampleData.SalarySource.getSource(StreamExecutionEnvironment env, long rate) |
static DataStream<Tuple3<String,Integer,Integer>> | WindowJoin.runWindowJoin(DataStream<Tuple2<String,Integer>> grades, DataStream<Tuple2<String,Integer>> salaries, long windowSize) |

Modifier and Type | Method and Description |
---|---|
static DataStream<Tuple3<String,Integer,Integer>> | WindowJoin.runWindowJoin(DataStream<Tuple2<String,Integer>> grades, DataStream<Tuple2<String,Integer>> salaries, long windowSize) |
Modifier and Type | Method and Description |
---|---|
<IN> CompletableFuture<Collection<IN>> | StreamCollector.collect(DataStream<IN> stream) |
Modifier and Type | Method and Description |
---|---|
<T> DataStream<T> | StreamTableEnvironment.toAppendStream(Table table, Class<T> clazz) Deprecated. Use StreamTableEnvironment.toDataStream(Table, Class) instead. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can produce. The semantics might be slightly different for raw and structured types. Use toDataStream(DataTypes.of(TypeInformation.of(Class))) if TypeInformation should be used as source of truth. |
<T> DataStream<T> | StreamTableEnvironment.toAppendStream(Table table, TypeInformation<T> typeInfo) Deprecated. Use StreamTableEnvironment.toDataStream(Table, Class) instead. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can produce. The semantics might be slightly different for raw and structured types. Use toDataStream(DataTypes.of(TypeInformation.of(Class))) if TypeInformation should be used as source of truth. |
DataStream<Row> | StreamTableEnvironment.toChangelogStream(Table table) Converts the given Table into a DataStream of changelog entries. |
DataStream<Row> | StreamTableEnvironment.toChangelogStream(Table table, Schema targetSchema) Converts the given Table into a DataStream of changelog entries. |
DataStream<Row> | StreamTableEnvironment.toChangelogStream(Table table, Schema targetSchema, ChangelogMode changelogMode) Converts the given Table into a DataStream of changelog entries. |
DataStream<Row> | StreamTableEnvironment.toDataStream(Table table) Converts the given Table into a DataStream. |
<T> DataStream<T> | StreamTableEnvironment.toDataStream(Table table, AbstractDataType<?> targetDataType) |
<T> DataStream<T> | StreamTableEnvironment.toDataStream(Table table, Class<T> targetClass) |
<T> DataStream<Tuple2<Boolean,T>> | StreamTableEnvironment.toRetractStream(Table table, Class<T> clazz) Deprecated. Use StreamTableEnvironment.toChangelogStream(Table, Schema) instead. It integrates with the new type system and supports all kinds of DataTypes and every ChangelogMode that the table runtime can produce. |
<T> DataStream<Tuple2<Boolean,T>> | StreamTableEnvironment.toRetractStream(Table table, TypeInformation<T> typeInfo) Deprecated. Use StreamTableEnvironment.toChangelogStream(Table, Schema) instead. It integrates with the new type system and supports all kinds of DataTypes and every ChangelogMode that the table runtime can produce. |
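The conversions above split into insert-only (toDataStream) and update-aware (toChangelogStream) variants; the difference matters for aggregating queries. A sketch, assuming a table named Orders with a name column is already registered:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class ToStreamSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // Assumption: a table named "Orders" with a "name" column exists.
        Table counts = tableEnv.sqlQuery(
                "SELECT name, COUNT(*) AS cnt FROM Orders GROUP BY name");

        // The aggregation emits updates, so the insert-only toDataStream(Table)
        // would fail at runtime; toChangelogStream carries update/retract rows.
        DataStream<Row> changelog = tableEnv.toChangelogStream(counts);
        changelog.print();

        env.execute("to-stream-sketch");
    }
}
```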
Modifier and Type | Method and Description |
---|---|
<T> void | StreamTableEnvironment.createTemporaryView(String path, DataStream<T> dataStream) Creates a view from the given DataStream in a given path. |
<T> void | StreamTableEnvironment.createTemporaryView(String path, DataStream<T> dataStream, Expression... fields) Deprecated. Use StreamTableEnvironment.createTemporaryView(String, DataStream, Schema) instead. In most cases, StreamTableEnvironment.createTemporaryView(String, DataStream) should already be sufficient. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can consume. The semantics might be slightly different for raw and structured types. |
<T> void | StreamTableEnvironment.createTemporaryView(String path, DataStream<T> dataStream, Schema schema) Creates a view from the given DataStream in a given path. |
<T> void | StreamTableEnvironment.createTemporaryView(String path, DataStream<T> dataStream, String fields) |
Table | StreamTableEnvironment.fromChangelogStream(DataStream<Row> dataStream) Converts the given DataStream of changelog entries into a Table. |
Table | StreamTableEnvironment.fromChangelogStream(DataStream<Row> dataStream, Schema schema) Converts the given DataStream of changelog entries into a Table. |
Table | StreamTableEnvironment.fromChangelogStream(DataStream<Row> dataStream, Schema schema, ChangelogMode changelogMode) Converts the given DataStream of changelog entries into a Table. |
<T> Table | StreamTableEnvironment.fromDataStream(DataStream<T> dataStream) Converts the given DataStream into a Table. |
<T> Table | StreamTableEnvironment.fromDataStream(DataStream<T> dataStream, Expression... fields) Deprecated. Use StreamTableEnvironment.fromDataStream(DataStream, Schema) instead. In most cases, StreamTableEnvironment.fromDataStream(DataStream) should already be sufficient. It integrates with the new type system and supports all kinds of DataTypes that the table runtime can consume. The semantics might be slightly different for raw and structured types. |
<T> Table | StreamTableEnvironment.fromDataStream(DataStream<T> dataStream, Schema schema) Converts the given DataStream into a Table. |
<T> Table | StreamTableEnvironment.fromDataStream(DataStream<T> dataStream, String fields) Deprecated. |
<T> void | StreamTableEnvironment.registerDataStream(String name, DataStream<T> dataStream) Deprecated. |
<T> void | StreamTableEnvironment.registerDataStream(String name, DataStream<T> dataStream, String fields) |
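In the other direction, fromDataStream derives a Table from a stream and createTemporaryView registers the stream under a catalog path so SQL can reference it. A minimal sketch; the column names f0/f1 follow Flink's default Tuple field naming:

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class FromStreamSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        DataStream<Tuple2<String, Integer>> scores =
                env.fromElements(Tuple2.of("alice", 12), Tuple2.of("bob", 5));

        // Derive a Table directly; columns default to the tuple fields f0, f1.
        Table scoresTable = tableEnv.fromDataStream(scores);

        // Or register the stream under a path so SQL can reference it by name.
        tableEnv.createTemporaryView("Scores", scores);
        Table top = tableEnv.sqlQuery("SELECT f0, f1 FROM Scores WHERE f1 > 10");
    }
}
```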
Modifier and Type | Method and Description |
---|---|
<T> DataStream<T> | StreamTableEnvironmentImpl.toAppendStream(Table table, Class<T> clazz) |
<T> DataStream<T> | StreamTableEnvironmentImpl.toAppendStream(Table table, TypeInformation<T> typeInfo) |
DataStream<Row> | StreamTableEnvironmentImpl.toChangelogStream(Table table) |
DataStream<Row> | StreamTableEnvironmentImpl.toChangelogStream(Table table, Schema targetSchema) |
DataStream<Row> | StreamTableEnvironmentImpl.toChangelogStream(Table table, Schema targetSchema, ChangelogMode changelogMode) |
DataStream<Row> | StreamTableEnvironmentImpl.toDataStream(Table table) |
<T> DataStream<T> | StreamTableEnvironmentImpl.toDataStream(Table table, AbstractDataType<?> targetDataType) |
<T> DataStream<T> | StreamTableEnvironmentImpl.toDataStream(Table table, Class<T> targetClass) |
<T> DataStream<Tuple2<Boolean,T>> | StreamTableEnvironmentImpl.toRetractStream(Table table, Class<T> clazz) |
<T> DataStream<Tuple2<Boolean,T>> | StreamTableEnvironmentImpl.toRetractStream(Table table, TypeInformation<T> typeInfo) |
Modifier and Type | Method and Description |
---|---|
<T> void | StreamTableEnvironmentImpl.createTemporaryView(String path, DataStream<T> dataStream) |
<T> void | StreamTableEnvironmentImpl.createTemporaryView(String path, DataStream<T> dataStream, Expression... fields) |
<T> void | StreamTableEnvironmentImpl.createTemporaryView(String path, DataStream<T> dataStream, Schema schema) |
<T> void | StreamTableEnvironmentImpl.createTemporaryView(String path, DataStream<T> dataStream, String fields) |
Table | StreamTableEnvironmentImpl.fromChangelogStream(DataStream<Row> dataStream) |
Table | StreamTableEnvironmentImpl.fromChangelogStream(DataStream<Row> dataStream, Schema schema) |
Table | StreamTableEnvironmentImpl.fromChangelogStream(DataStream<Row> dataStream, Schema schema, ChangelogMode changelogMode) |
<T> Table | StreamTableEnvironmentImpl.fromDataStream(DataStream<T> dataStream) |
<T> Table | StreamTableEnvironmentImpl.fromDataStream(DataStream<T> dataStream, Expression... fields) |
<T> Table | StreamTableEnvironmentImpl.fromDataStream(DataStream<T> dataStream, Schema schema) |
<T> Table | StreamTableEnvironmentImpl.fromDataStream(DataStream<T> dataStream, String fields) |
<T> void | StreamTableEnvironmentImpl.registerDataStream(String name, DataStream<T> dataStream) |
<T> void | StreamTableEnvironmentImpl.registerDataStream(String name, DataStream<T> dataStream, String fields) |
Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | DataStreamSinkProvider.consumeDataStream(DataStream<RowData> dataStream) Consumes the given Java DataStream and returns the sink transformation DataStreamSink. |

Modifier and Type | Method and Description |
---|---|
DataStream<RowData> | DataStreamScanProvider.produceDataStream(StreamExecutionEnvironment execEnv) Creates a scan Java DataStream from a StreamExecutionEnvironment. |
Modifier and Type | Method and Description |
---|---|
static <T> DataStream<PartitionCommitInfo> | StreamingSink.compactionWriter(DataStream<T> inputStream, long bucketCheckInterval, StreamingFileSink.BucketsBuilder<T,String,? extends StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder, FileSystemFactory fsFactory, Path path, CompactReader.Factory<T> readFactory, long targetFileSize, int parallelism) Creates a file writer with compaction operators from the input stream. |
static <T> DataStream<PartitionCommitInfo> | StreamingSink.writer(DataStream<T> inputStream, long bucketCheckInterval, StreamingFileSink.BucketsBuilder<T,String,? extends StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder, int parallelism, List<String> partitionKeys, Configuration conf) Creates a file writer from the input stream. |

Modifier and Type | Method and Description |
---|---|
static <T> DataStream<PartitionCommitInfo> | StreamingSink.compactionWriter(DataStream<T> inputStream, long bucketCheckInterval, StreamingFileSink.BucketsBuilder<T,String,? extends StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder, FileSystemFactory fsFactory, Path path, CompactReader.Factory<T> readFactory, long targetFileSize, int parallelism) Creates a file writer with compaction operators from the input stream. |
static DataStreamSink<?> | StreamingSink.sink(DataStream<PartitionCommitInfo> writer, Path locationPath, ObjectIdentifier identifier, List<String> partitionKeys, TableMetaStoreFactory msFactory, FileSystemFactory fsFactory, Configuration options) Creates a sink from a file writer. |
static <T> DataStream<PartitionCommitInfo> | StreamingSink.writer(DataStream<T> inputStream, long bucketCheckInterval, StreamingFileSink.BucketsBuilder<T,String,? extends StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder, int parallelism, List<String> partitionKeys, Configuration conf) Creates a file writer from the input stream. |
Modifier and Type | Method and Description |
---|---|
DataStream<E> | ScalaDataStreamQueryOperation.getDataStream() |
DataStream<E> | ScalaExternalQueryOperation.getDataStream() |
DataStream<E> | JavaDataStreamQueryOperation.getDataStream() |
DataStream<E> | JavaExternalQueryOperation.getDataStream() |

Constructor and Description |
---|
JavaDataStreamQueryOperation(DataStream<E> dataStream, int[] fieldIndices, ResolvedSchema resolvedSchema) |
JavaDataStreamQueryOperation(ObjectIdentifier identifier, DataStream<E> dataStream, int[] fieldIndices, ResolvedSchema resolvedSchema) |
JavaExternalQueryOperation(ObjectIdentifier identifier, DataStream<E> dataStream, DataType physicalDataType, boolean isTopLevelRecord, ChangelogMode changelogMode, ResolvedSchema resolvedSchema) |
ScalaDataStreamQueryOperation(DataStream<E> dataStream, int[] fieldIndices, ResolvedSchema resolvedSchema) |
ScalaDataStreamQueryOperation(ObjectIdentifier identifier, DataStream<E> dataStream, int[] fieldIndices, ResolvedSchema resolvedSchema) |
ScalaExternalQueryOperation(ObjectIdentifier identifier, DataStream<E> dataStream, DataType physicalDataType, boolean isTopLevelRecord, ChangelogMode changelogMode, ResolvedSchema resolvedSchema) |
Modifier and Type | Method and Description |
---|---|
static org.apache.calcite.rel.RelNode | DynamicSourceUtils.convertDataStreamToRel(boolean isBatchMode, ReadableConfig config, org.apache.flink.table.planner.calcite.FlinkRelBuilder relBuilder, ObjectIdentifier identifier, ResolvedSchema schema, DataStream<?> dataStream, DataType physicalDataType, boolean isTopLevelRecord, ChangelogMode changelogMode) Converts a given DataStream to a RelNode. |

Modifier and Type | Method and Description |
---|---|
DataStream<E> | DataStreamQueryOperation.getDataStream() |

Constructor and Description |
---|
DataStreamQueryOperation(ObjectIdentifier identifier, DataStream<E> dataStream, int[] fieldIndices, ResolvedSchema resolvedSchema, boolean[] fieldNullables, org.apache.flink.table.planner.plan.stats.FlinkStatistic statistic) |

Modifier and Type | Method and Description |
---|---|
DataStream<?> | BatchExecBoundedStreamScan.getDataStream() |

Constructor and Description |
---|
BatchExecBoundedStreamScan(DataStream<?> dataStream, DataType sourceType, int[] fieldIndexes, List<String> qualifiedName, RowType outputType, String description) |

Modifier and Type | Method and Description |
---|---|
DataStream<?> | StreamExecDataStreamScan.getDataStream() |

Constructor and Description |
---|
StreamExecDataStreamScan(DataStream<?> dataStream, DataType sourceType, int[] fieldIndexes, String[] fieldNames, List<String> qualifiedName, RowType outputType, String description) |
Modifier and Type | Method and Description |
---|---|
DataStream<RowData> | ArrowTableSource.getDataStream(StreamExecutionEnvironment execEnv) |

Modifier and Type | Method and Description |
---|---|
DataStreamSink<?> | CsvTableSink.consumeDataStream(DataStream<Row> dataStream) Deprecated. |
DataStreamSink<T> | OutputFormatTableSink.consumeDataStream(DataStream<T> dataStream) Deprecated. |
DataStreamSink<?> | StreamTableSink.consumeDataStream(DataStream<T> dataStream) Deprecated. Consumes the DataStream and returns the sink transformation DataStreamSink. |

Modifier and Type | Method and Description |
---|---|
DataStream<T> | InputFormatTableSource.getDataStream(StreamExecutionEnvironment execEnv) Deprecated. |
DataStream<Row> | CsvTableSource.getDataStream(StreamExecutionEnvironment execEnv) Deprecated. |
DataStream<T> | StreamTableSource.getDataStream(StreamExecutionEnvironment execEnv) Deprecated. Returns the data of the table as a DataStream. |
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.