Class and Description |
---|
org.apache.flink.streaming.util.serialization.AbstractDeserializationSchema Use org.apache.flink.api.common.serialization.AbstractDeserializationSchema instead. |
org.apache.flink.streaming.api.functions.AscendingTimestampExtractor Extend org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor instead. |
org.apache.flink.formats.avro.typeutils.AvroSerializer.AvroSchemaSerializerConfigSnapshot |
org.apache.flink.streaming.api.windowing.assigners.BaseAlignedWindowAssigner Will be removed in a future version. Please use one of the other WindowAssigner implementations under org.apache.flink.streaming.api.windowing.assigners. |
org.apache.flink.streaming.runtime.io.BufferSpiller |
org.apache.flink.streaming.runtime.io.BufferSpiller.SpilledBufferOrEventSequence |
org.apache.flink.batch.connectors.cassandra.CassandraOutputFormat Please use CassandraTupleOutputFormat instead. |
org.apache.flink.api.common.typeutils.base.CollectionSerializerConfigSnapshot This snapshot class should no longer be used by any serializer as its snapshot. |
org.apache.flink.api.common.typeutils.CompatibilityUtil This utility class still uses the old serializer compatibility interfaces and is therefore deprecated. See TypeSerializerConfigSnapshot.resolveSchemaCompatibility(TypeSerializer) and TypeSerializerSchemaCompatibility. |
org.apache.flink.streaming.connectors.fs.DateTimeBucketer Use org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer instead. |
org.apache.flink.api.java.typeutils.runtime.EitherSerializerConfigSnapshot |
org.apache.flink.streaming.runtime.operators.ExtractTimestampsOperator |
org.apache.flink.streaming.api.functions.source.FileMonitoringFunction Internal class deprecated in favour of ContinuousFileMonitoringFunction. |
org.apache.flink.streaming.api.functions.source.FileReadFunction Internal class deprecated in favour of ContinuousFileMonitoringFunction. |
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer081 |
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082 |
org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaDelegatePartitioner Delegate for KafkaPartitioner; use FlinkKafkaPartitioner instead. |
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer010.FlinkKafkaProducer010Configuration This class is deprecated since the factory methods writeToKafkaWithTimestamps for the producer are also deprecated. |
org.apache.flink.streaming.api.functions.windowing.FoldApplyAllWindowFunction Will be removed in a future version. |
org.apache.flink.streaming.api.functions.windowing.FoldApplyProcessAllWindowFunction Will be removed in a future version. |
org.apache.flink.streaming.api.functions.windowing.FoldApplyProcessWindowFunction Will be removed in a future version. |
org.apache.flink.streaming.api.functions.windowing.FoldApplyWindowFunction Will be removed in a future version. |
org.apache.flink.api.common.state.FoldingStateDescriptor Will be removed in a future version in favor of AggregatingStateDescriptor. |
org.apache.flink.queryablestate.client.state.ImmutableFoldingState |
org.apache.flink.runtime.rest.handler.job.metrics.JobVertexMetricsHandler This class is subsumed by SubtaskMetricsHandler and is only kept for backwards-compatibility. |
org.apache.flink.streaming.util.serialization.JSONDeserializationSchema Please use JsonNodeDeserializationSchema in the "flink-json" module. |
org.apache.flink.streaming.util.serialization.JsonRowDeserializationSchema Please use JsonRowDeserializationSchema in the "flink-json" module. |
org.apache.flink.streaming.util.serialization.JsonRowSerializationSchema Please use JsonRowSerializationSchema in the "flink-json" module. |
org.apache.flink.streaming.connectors.kafka.Kafka010AvroTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka010AvroTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka010JsonTableSink Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka010JsonTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka010JsonTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka011AvroTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka011AvroTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka011JsonTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka011JsonTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka08AvroTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka08AvroTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka08JsonTableSink Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka08JsonTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka08JsonTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka09AvroTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka09AvroTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka09JsonTableSink Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka09JsonTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.Kafka09JsonTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.KafkaAvroTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.KafkaAvroTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.KafkaJsonTableSink Use table descriptors instead of implementation-specific classes. |
org.apache.flink.streaming.connectors.kafka.KafkaJsonTableSource Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.KafkaJsonTableSource.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.connectors.kafka.partitioner.KafkaPartitioner This partitioner does not handle partitioning properly in the case of multiple topics and has been deprecated. Please use FlinkKafkaPartitioner instead. |
org.apache.flink.streaming.connectors.kafka.KafkaTableSourceBase.Builder Use the Kafka descriptor together with descriptors for schema and format instead. Descriptors allow for implementation-agnostic definition of tables. See also TableEnvironment.connect(ConnectorDescriptor). |
org.apache.flink.streaming.api.operators.LegacyKeyedProcessOperator Replaced by KeyedProcessOperator, which takes a KeyedProcessFunction. |
org.apache.flink.api.common.typeutils.base.MapSerializerConfigSnapshot This snapshot class should not be used by any serializer anymore. |
org.apache.flink.test.util.MiniClusterResource This class should be replaced with MiniClusterWithClientResource. |
org.apache.flink.test.util.MiniClusterResourceConfiguration This class should be replaced with org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration. |
org.apache.flink.cep.nfa.NFA.NFASerializer |
org.apache.flink.cep.nfa.NFA.NFASerializerConfigSnapshot |
org.apache.flink.streaming.connectors.fs.NonRollingBucketer Use BasePathBucketer instead. |
org.apache.flink.streaming.api.functions.sink.OutputFormatSinkFunction Please use the BucketingSink for writing to files from a streaming program. |
org.apache.flink.runtime.webmonitor.handlers.ProgramArgsQueryParameter Please use JarRequestBody.FIELD_NAME_PROGRAM_ARGUMENTS_LIST instead. |
org.apache.flink.api.common.functions.RichFoldFunction Use RichAggregateFunction instead. |
org.apache.flink.streaming.api.functions.windowing.RichProcessAllWindowFunction Use ProcessAllWindowFunction instead. |
org.apache.flink.streaming.api.functions.windowing.RichProcessWindowFunction Use ProcessWindowFunction instead. |
org.apache.flink.streaming.connectors.fs.RollingSink Use BucketingSink instead. |
org.apache.flink.cep.nfa.SharedBuffer Everything in this class is deprecated; it contains only migration procedures from older versions. |
org.apache.flink.streaming.util.serialization.SimpleStringSchema Use org.apache.flink.api.common.serialization.SimpleStringSchema instead. |
org.apache.flink.streaming.api.windowing.assigners.SlidingTimeWindows Please use SlidingEventTimeWindows. |
org.apache.flink.streaming.api.datastream.SplitStream |
org.apache.flink.streaming.api.operators.StreamGroupedFold Will be removed in a future version. |
org.apache.flink.runtime.checkpoint.TaskState Internal class for savepoint backwards compatibility. Do not use for other purposes. |
org.apache.flink.streaming.api.windowing.assigners.TumblingTimeWindows Please use TumblingEventTimeWindows. |
org.apache.flink.streaming.util.serialization.TypeInformationSerializationSchema Use org.apache.flink.api.common.serialization.TypeInformationSerializationSchema instead. |
org.apache.flink.api.common.typeutils.TypeSerializerConfigSnapshot |
org.apache.flink.api.common.typeutils.TypeSerializerSerializationUtil This utility class was used to write serializers into checkpoints. Starting from Flink 1.6.x this should no longer happen, so the class is deprecated; it remains only for backwards-compatibility paths. |
org.apache.flink.streaming.api.functions.sink.WriteFormat Please use the BucketingSink for writing to files from a streaming program. |
org.apache.flink.streaming.api.functions.sink.WriteFormatAsCsv Please use the BucketingSink for writing to files from a streaming program. |
org.apache.flink.streaming.api.functions.sink.WriteFormatAsText Please use the BucketingSink for writing to files from a streaming program. |
org.apache.flink.streaming.api.functions.sink.WriteSinkFunction Please use the BucketingSink for writing to files from a streaming program. |
org.apache.flink.streaming.api.functions.sink.WriteSinkFunctionByMillis Please use the BucketingSink for writing to files from a streaming program. |
org.apache.flink.runtime.rest.messages.YarnCancelJobTerminationHeaders This should be removed once we can send arbitrary REST calls via the Yarn proxy. |
org.apache.flink.runtime.rest.messages.YarnStopJobTerminationHeaders This should be removed once we can send arbitrary REST calls via the Yarn proxy. |
Enum Constant and Description |
---|
org.apache.flink.api.common.state.StateDescriptor.Type.UNKNOWN Enum constant for migrating from old checkpoint/savepoint versions. |
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.