Interface and Description |
---|
org.apache.flink.table.sinks.AppendStreamTableSink
This interface has been replaced by
DynamicTableSink . The new interface
consumes internal data structures. See FLIP-95 for more information. |
org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks |
org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks |
org.apache.flink.table.connector.source.AsyncTableFunctionProvider
Please use
AsyncLookupFunctionProvider to implement asynchronous lookup tables. |
org.apache.flink.table.catalog.CatalogLock
This interface will be removed soon. Please see FLIP-346 for more details.
|
org.apache.flink.table.catalog.CatalogLock.Factory
This interface will be removed soon. Please see FLIP-346 for more details.
|
org.apache.flink.runtime.state.CheckpointListener
This interface has been moved to
CheckpointListener . This class is kept to maintain
backwards compatibility and will be removed in future releases. |
org.apache.flink.api.connector.sink.Committer
Please use
Committer . |
org.apache.flink.table.api.constraints.Constraint
See
ResolvedSchema and Constraint . |
org.apache.flink.api.java.operators.CustomUnaryOperation
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.sources.DefinedFieldMapping
This interface will not be supported in the new source design around
DynamicTableSource . See FLIP-95 for more information. |
org.apache.flink.table.sources.DefinedProctimeAttribute
This interface will not be supported in the new source design around
DynamicTableSource . Use the concept of computed columns instead. See FLIP-95 for more
information. |
org.apache.flink.table.sources.DefinedRowtimeAttributes
This interface will not be supported in the new source design around
DynamicTableSource . Use the concept of computed columns instead. See FLIP-95 for more
information. |
org.apache.flink.table.descriptors.Descriptor
Descriptor was primarily used for the legacy connector stack and has been
deprecated. Use TableDescriptor for creating sources and sinks from the Table API. |
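For illustration, a minimal sketch of the TableDescriptor-based replacement (the datagen connector, table name, and options below are placeholders, not taken from this listing):

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.TableEnvironment;

public class TableDescriptorExample {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Declarative source/sink definition replacing the legacy Descriptor stack.
        TableDescriptor descriptor = TableDescriptor.forConnector("datagen")
                .schema(Schema.newBuilder()
                        .column("id", DataTypes.BIGINT())
                        .column("name", DataTypes.STRING())
                        .build())
                .option("number-of-rows", "10") // placeholder option to keep the example bounded
                .build();

        tableEnv.createTemporaryTable("GeneratedTable", descriptor);
        tableEnv.from("GeneratedTable").execute().print();
    }
}
```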
org.apache.flink.table.descriptors.DescriptorValidator
See
Descriptor for details. |
org.apache.flink.streaming.util.serialization.DeserializationSchema
Use
DeserializationSchema instead. |
org.apache.flink.api.java.ExecutionEnvironmentFactory
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.api.checkpoint.ExternallyInducedSource
This interface is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.table.sources.FieldComputer
This interface will not be supported in the new source design around
DynamicTableSource . Use the concept of computed columns instead. See FLIP-95 for more
information. |
org.apache.flink.connector.file.src.reader.FileRecordFormat
Please use
StreamFormat instead. The main motivation for removing it is an inherent design
flaw in the batching of FileRecordFormat: StreamFormat can guarantee that only a certain
amount of memory is used (unless a single record already exceeds that limit), but
FileRecordFormat can only batch by the number of records. By removing FileRecordFormat,
the responsibility for implementing the batching is delegated to the format developer, who
needs to use BulkFormat and find a better approach than batching by the number of records. |
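For illustration, a hedged sketch of reading files through an existing StreamFormat implementation (TextLineInputFormat); the input path is a placeholder:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamFormatExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // TextLineInputFormat is a StreamFormat, so the framework, not the format,
        // controls memory-bounded batching.
        FileSource<String> source = FileSource
                .forRecordStreamFormat(new TextLineInputFormat(), new Path("/tmp/input"))
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source").print();
        env.execute("stream-format-example");
    }
}
```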
org.apache.flink.table.sources.FilterableTableSource
This interface will not be supported in the new source design around
DynamicTableSource . Use SupportsFilterPushDown instead. See FLIP-95 for more
information. |
org.apache.flink.api.connector.sink.GlobalCommitter
Please use
WithPostCommitTopology with StandardSinkTopologies#addGlobalCommitter . |
org.apache.flink.api.java.operators.join.JoinFunctionAssigner
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.types.Key
The Key type is a relic of a deprecated and removed API and will be removed in
future (2.0) versions as well.
|
org.apache.flink.table.sources.LimitableTableSource
This interface will not be supported in the new source design around
DynamicTableSource . Use SupportsLimitPushDown instead. See FLIP-95 for more
information. |
org.apache.flink.streaming.api.checkpoint.ListCheckpointed
If you need to do non-keyed state snapshots of your operator, use
CheckpointedFunction . This should only be needed in rare cases, though. |
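A minimal sketch of non-keyed state with CheckpointedFunction, assuming a hypothetical counting mapper:

```java
import java.util.Collections;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;

public class CountingMapper implements MapFunction<String, String>, CheckpointedFunction {

    private transient ListState<Long> checkpointedCount;
    private long count;

    @Override
    public String map(String value) {
        count++;
        return value;
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        // Snapshot the running count as non-keyed operator state.
        checkpointedCount.update(Collections.singletonList(count));
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        checkpointedCount = context.getOperatorStateStore()
                .getListState(new ListStateDescriptor<>("count", Long.class));
        for (Long restored : checkpointedCount.get()) {
            count += restored; // merge entries after rescaling
        }
    }
}
```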
org.apache.flink.table.sources.LookupableTableSource
This interface will not be supported in the new source design around
DynamicTableSource . Use LookupTableSource instead. See FLIP-95 for more information. |
org.apache.flink.table.factories.ManagedTableFactory
This interface will be removed soon. Please see FLIP-346 for more details.
|
org.apache.flink.table.sources.NestedFieldsProjectableTableSource
This interface will not be supported in the new source design around
DynamicTableSource . Use SupportsProjectionPushDown instead. See FLIP-95 for more
information. |
org.apache.flink.table.sinks.OverwritableTableSink
This interface will not be supported in the new sink design around
DynamicTableSink . Use SupportsOverwrite instead. See FLIP-95 for more information. |
org.apache.flink.streaming.api.functions.source.ParallelSourceFunction
This interface is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.table.sinks.PartitionableTableSink
This interface will not be supported in the new sink design around
DynamicTableSink . Use SupportsPartitioning instead. See FLIP-95 for more
information. |
org.apache.flink.table.sources.PartitionableTableSource
This interface will not be supported in the new source design around
DynamicTableSource . Use SupportsPartitionPushDown instead. See FLIP-95 for more
information. |
org.apache.flink.table.sources.ProjectableTableSource
This interface will not be supported in the new source design around
DynamicTableSource . Use SupportsProjectionPushDown instead. See FLIP-95 for more
information. |
org.apache.flink.table.connector.RequireCatalogLock
This interface will be removed soon. Please see FLIP-346 for more details.
|
org.apache.flink.table.sinks.RetractStreamTableSink
This interface has been replaced by
DynamicTableSink . The new interface
consumes internal data structures. See FLIP-95 for more information. |
org.apache.flink.streaming.util.serialization.SerializationSchema
Use
SerializationSchema instead. |
org.apache.flink.streaming.api.operators.SetupableStreamOperator
This class is deprecated in favour of using
StreamOperatorFactory and it's
StreamOperatorFactory.createStreamOperator(org.apache.flink.streaming.api.operators.StreamOperatorParameters<OUT>) and passing the required parameters to the
Operator's constructor in create method. |
org.apache.flink.api.connector.sink.Sink
Please use
Sink or a derivative. |
org.apache.flink.api.connector.sink2.Sink.InitContext |
org.apache.flink.api.connector.sink.Sink.InitContext
Please migrate to
Sink and use
Sink.InitContext . |
org.apache.flink.api.connector.sink.Sink.ProcessingTimeService
Please migrate to
Sink and use
ProcessingTimeService . |
org.apache.flink.api.connector.sink.Sink.ProcessingTimeService.ProcessingTimeCallback
Please migrate to
Sink and use
ProcessingTimeService.ProcessingTimeCallback . |
org.apache.flink.streaming.api.functions.sink.SinkFunction
This interface will be removed in future versions. Use the new
Sink interface instead. |
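As a rough sketch of the Sink V2 shape, a hypothetical printing sink is shown below; this is not a production implementation, and newer releases replace the single-argument createWriter with a WriterInitContext variant:

```java
import java.io.IOException;

import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.api.connector.sink2.SinkWriter;

public class PrintingSink implements Sink<String> {

    @Override
    public SinkWriter<String> createWriter(InitContext context) throws IOException {
        // Note: this InitContext-based method is itself deprecated in newer releases;
        // the sketch targets the 1.x signature.
        return new SinkWriter<String>() {
            @Override
            public void write(String element, Context context) {
                System.out.println(element); // replaces SinkFunction.invoke()
            }

            @Override
            public void flush(boolean endOfInput) {}

            @Override
            public void close() {}
        };
    }
}
```

Such a sink would be attached with stream.sinkTo(new PrintingSink()) rather than stream.addSink(...).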
org.apache.flink.table.connector.sink.SinkProvider
Please convert your sink to
Sink and use
SinkV2Provider . |
org.apache.flink.api.connector.sink.SinkWriter
Please use
SinkWriter or a derivative. |
org.apache.flink.api.connector.sink.SinkWriter.Context
Please migrate to
SinkWriter and use
SinkWriter.Context . |
org.apache.flink.streaming.api.functions.source.SourceFunction
This interface will be removed in future versions. Use the new
Source interface instead. NOTE: All sub-tasks from
FLINK-28045 must be closed before this API can be completely removed. |
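For comparison, attaching a FLIP-27 Source is a single call; a sketch using the built-in NumberSequenceSource as a stand-in for a custom source:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.connector.source.lib.NumberSequenceSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class NewSourceApiExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // env.fromSource(...) replaces env.addSource(new MySourceFunction()).
        env.fromSource(
                        new NumberSequenceSource(1L, 1_000L),
                        WatermarkStrategy.noWatermarks(),
                        "number-sequence")
                .print();

        env.execute("new-source-api-example");
    }
}
```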
org.apache.flink.table.connector.source.SourceFunctionProvider
This interface is based on the
SourceFunction API, which is due to be
removed. Use SourceProvider instead. |
org.apache.flink.api.connector.sink2.StatefulSink
Please implement
Sink and SupportsWriterState instead. |
org.apache.flink.table.sinks.StreamTableSink
This interface has been replaced by
DynamicTableSink . The new interface
consumes internal data structures. See FLIP-95 for more information. |
org.apache.flink.table.factories.StreamTableSinkFactory
This interface has been replaced by
DynamicTableSinkFactory . The new
interface creates instances of DynamicTableSink . See FLIP-95 for more information. |
org.apache.flink.table.sources.StreamTableSource
This interface has been replaced by
DynamicTableSource . The new interface
produces internal data structures. See FLIP-95 for more information. |
org.apache.flink.table.factories.StreamTableSourceFactory
This interface has been replaced by
DynamicTableSourceFactory . The new
interface creates instances of DynamicTableSource . See FLIP-95 for more information. |
org.apache.flink.table.factories.TableFactory
This interface has been replaced by
Factory . |
org.apache.flink.table.connector.source.TableFunctionProvider
Please use
LookupFunctionProvider to implement synchronous lookup tables. |
org.apache.flink.table.sinks.TableSink
This interface has been replaced by
DynamicTableSink . The new interface
consumes internal data structures. See FLIP-95 for more information. |
org.apache.flink.table.factories.TableSinkFactory
This interface has been replaced by
DynamicTableSinkFactory . The new
interface consumes internal data structures. See FLIP-95 for more information. |
org.apache.flink.table.sources.TableSource
This interface has been replaced by
DynamicTableSource . The new interface
produces internal data structures. See FLIP-95 for more information. |
org.apache.flink.table.factories.TableSourceFactory
This interface has been replaced by
DynamicTableSourceFactory . The new
interface produces internal data structures. See FLIP-95 for more information. |
org.apache.flink.streaming.api.functions.TimestampAssigner |
org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink
Please implement
Sink and SupportsCommitter instead. |
org.apache.flink.api.connector.sink2.TwoPhaseCommittingSink.PrecommittingSinkWriter |
org.apache.flink.api.java.operators.UdfOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.sinks.UpsertStreamTableSink
This interface has been replaced by
DynamicTableSink . The new interface
consumes internal data structures. See FLIP-95 for more information. |
org.apache.flink.table.runtime.groupwindow.WindowProperty
The POJOs in this package are used to represent the deprecated Group Window feature.
Currently, they are also used to configure Python operators.
|
org.apache.flink.streaming.api.connector.sink2.WithPostCommitTopology
Please implement
Sink , SupportsCommitter and SupportsPostCommitTopology instead. |
org.apache.flink.streaming.api.connector.sink2.WithPreCommitTopology
Please implement
Sink , SupportsCommitter and SupportsPreCommitTopology
instead. |
org.apache.flink.streaming.api.connector.sink2.WithPreWriteTopology
Please implement
Sink and SupportsPreWriteTopology instead. |
org.apache.flink.streaming.api.operators.YieldingOperatorFactory |
Class and Description |
---|
org.apache.flink.streaming.util.serialization.AbstractDeserializationSchema
Use
AbstractDeserializationSchema
instead. |
org.apache.flink.runtime.state.filesystem.AbstractFileStateBackend
State backends should no longer implement
CheckpointStorage functionality.
Please inherit AbstractStateBackend instead. Custom checkpoint storage can be
additionally implemented as a separate class. |
org.apache.flink.client.deployment.executors.AbstractJobClusterExecutor |
org.apache.flink.test.util.AbstractTestBaseJUnit4
Use
AbstractTestBase instead. |
org.apache.flink.table.runtime.groupwindow.AbstractWindowProperty
The POJOs in this package are used to represent the deprecated Group Window feature.
Currently, they are also used to configure Python operators.
|
org.apache.flink.table.functions.AggregateFunctionDefinition
Non-legacy functions can simply omit this wrapper for declarations.
|
org.apache.flink.api.java.operators.AggregateOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.configuration.AkkaOptions
Use
RpcOptions instead. |
org.apache.flink.table.operations.ddl.AlterTableOptionsOperation
Please use
AlterTableChangeOperation instead. |
org.apache.flink.cep.pattern.conditions.AndCondition
Please use
RichAndCondition instead. This class exists just for backwards
compatibility and will be removed in FLINK-10113. |
org.apache.flink.table.runtime.arrow.sources.ArrowSourceFunction
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.streaming.api.functions.AscendingTimestampExtractor
Extend
AscendingTimestampExtractor instead. |
org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor |
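A hedged sketch of the WatermarkStrategy replacement for ascending timestamps (the event type and field below are hypothetical):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.datastream.DataStream;

public class AscendingWatermarksExample {

    /** Hypothetical event type, used only for illustration. */
    public static class MyEvent {
        public long timestamp;
    }

    public static DataStream<MyEvent> withAscendingWatermarks(DataStream<MyEvent> events) {
        // forMonotonousTimestamps() mirrors the "ascending timestamps" assumption
        // of the deprecated extractor.
        return events.assignTimestampsAndWatermarks(
                WatermarkStrategy.<MyEvent>forMonotonousTimestamps()
                        .withTimestampAssigner((event, previousTimestamp) -> event.timestamp));
    }
}
```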
org.apache.flink.formats.avro.AvroRowDeserializationSchema
The format was developed for Table API users and will no longer be maintained for
DataStream API users. Either use the Table API or switch to the DataStream API, defining your
own DeserializationSchema. |
org.apache.flink.formats.avro.AvroRowSerializationSchema
The format was developed for Table API users and will no longer be maintained for
DataStream API users. Either use the Table API or switch to the DataStream API, defining your
own SerializationSchema. |
org.apache.flink.formats.avro.typeutils.AvroSerializer.AvroSchemaSerializerConfigSnapshot |
org.apache.flink.api.java.summarize.BooleanColumnSummary
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.state.api.BootstrapTransformation
Use
StateBootstrapTransformation instead. |
org.apache.flink.state.api.runtime.BootstrapTransformationWithID |
org.apache.flink.state.api.output.BoundedOneInputStreamTaskRunner |
org.apache.flink.table.catalog.CatalogTableImpl
Use
CatalogTable.of(Schema, String, List, Map) or a custom implementation
instead. Don't implement against this internal class. It can lead to unintended side effects
if code checks against this class instead of the common interface. |
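A brief sketch of the suggested factory method (schema, comment, and options are placeholders):

```java
import java.util.Collections;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.catalog.CatalogTable;

public class CatalogTableExample {
    public static CatalogTable exampleTable() {
        return CatalogTable.of(
                Schema.newBuilder()
                        .column("id", DataTypes.INT())
                        .column("name", DataTypes.STRING())
                        .build(),
                "an example table",                                  // comment
                Collections.emptyList(),                             // partition keys
                Collections.singletonMap("connector", "datagen"));   // placeholder options
    }
}
```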
org.apache.flink.table.catalog.CatalogViewImpl
Use
CatalogView.of(Schema, String, String, String, Map) or a custom
implementation instead. Don't implement against this internal class. It can lead to
unintended side effects if code checks against this class instead of the common interface. |
org.apache.flink.api.java.operators.CoGroupOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.CollectionEnvironment
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.io.CollectionInputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.summarize.ColumnSummary
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.descriptors.ConnectorDescriptorValidator |
org.apache.flink.streaming.api.functions.source.ContinuousFileMonitoringFunction
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.api.java.operators.CrossOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
The legacy CSV connector has been replaced by
FileSink . It is kept only to
support tests for the legacy connector stack. |
org.apache.flink.table.sources.CsvAppendTableSourceFactory
The legacy CSV connector has been replaced by
FileSource . It is kept only to
support tests for the legacy connector stack. |
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
The legacy CSV connector has been replaced by
FileSink . It is kept only to
support tests for the legacy connector stack. |
org.apache.flink.table.sources.CsvBatchTableSourceFactory
The legacy CSV connector has been replaced by
FileSource . It is kept only to
support tests for the legacy connector stack. |
org.apache.flink.api.java.io.CsvOutputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.io.CsvReader
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.formats.csv.CsvRowDeserializationSchema
The format was developed for Table API users and will no longer be maintained for
DataStream API users. Either use the Table API or switch to the DataStream API, defining your
own DeserializationSchema. |
org.apache.flink.formats.csv.CsvRowSerializationSchema
The format was developed for Table API users and will no longer be maintained for
DataStream API users. Either use the Table API or switch to the DataStream API, defining your
own SerializationSchema. |
org.apache.flink.table.sinks.CsvTableSink
The legacy CSV connector has been replaced by
FileSink . It is kept only to
support tests for the legacy connector stack. |
org.apache.flink.table.sinks.CsvTableSinkFactoryBase
The legacy CSV connector has been replaced by
FileSink . It is kept only to
support tests for the legacy connector stack. |
org.apache.flink.table.sources.CsvTableSource
The legacy CSV connector has been replaced by
FileSource . It is kept only to
support tests for the legacy connector stack. |
org.apache.flink.table.sources.CsvTableSourceFactoryBase
The legacy CSV connector has been replaced by
FileSource . It is kept only to
support tests for the legacy connector stack. |
org.apache.flink.streaming.api.functions.source.datagen.DataGeneratorSource
Use
org.apache.flink.connector.datagen.source.DataGeneratorSource instead. |
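A hedged sketch of the replacement from flink-connector-datagen (generator function, count, and rate are illustrative):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.connector.source.util.ratelimit.RateLimiterStrategy;
import org.apache.flink.connector.datagen.source.DataGeneratorSource;
import org.apache.flink.connector.datagen.source.GeneratorFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DataGeneratorExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        GeneratorFunction<Long, String> generator = index -> "element-" + index;

        DataGeneratorSource<String> source = new DataGeneratorSource<>(
                generator,
                1_000L,                             // total records to emit
                RateLimiterStrategy.perSecond(10),  // illustrative rate limit
                Types.STRING);

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "datagen").print();
        env.execute("data-generator-example");
    }
}
```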
org.apache.flink.api.java.DataSet
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.utils.DataSetUtils
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.DataSink
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.DataSource
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.operations.DataStreamQueryOperation |
org.apache.flink.contrib.streaming.state.DefaultConfigurableOptionsFactory |
org.apache.flink.streaming.util.typeutils.DefaultScalaProductFieldAccessorFactory
All Flink Scala APIs are deprecated and will be removed in a future Flink major
version. You can still build your application in Scala, but you should move to the Java
version of either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.DeltaIteration
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.DeltaIterationResultSet
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.descriptors.DescriptorProperties
This utility will be dropped soon.
DynamicTableFactory is based on ConfigOption and catalogs use CatalogPropertiesUtil . |
org.apache.flink.api.java.io.DiscardingOutputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.api.functions.sink.DiscardingSink
This interface will be removed in future versions. Use the new
DiscardingSink interface instead. |
org.apache.flink.api.java.operators.DistinctOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.typeutils.runtime.EitherSerializerSnapshot
This snapshot class is no longer used by any serializers. Instead,
JavaEitherSerializerSnapshot is used. |
org.apache.flink.state.api.EvictingWindowReader |
org.apache.flink.streaming.api.environment.ExecutionCheckpointingOptions
All configuration items in this class have been moved to
CheckpointingOptions . |
org.apache.flink.api.common.ExecutionConfig.SerializableSerializer
The class is deprecated because instance-type serializer definitions, in which
serializers are serialized and written into the snapshot and deserialized for use, are
deprecated. Use class-type serializer definitions instead, where only the class name is
written into the snapshot and a new instance of the serializer is created for use. This is
a breaking change, and it will be removed in Flink 2.0.
|
org.apache.flink.api.java.ExecutionEnvironment
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.sources.tsextractors.ExistingField
This class will not be supported in the new source design around
DynamicTableSource . See FLIP-95 for more
information. |
org.apache.flink.state.api.ExistingSavepoint
For creating a new savepoint, use
SavepointWriter and the DataStream API under batch execution. For reading a savepoint,
use SavepointReader and the DataStream API under batch execution. |
org.apache.flink.streaming.api.functions.source.FileMonitoringFunction
Internal class deprecated in favour of
ContinuousFileMonitoringFunction . |
org.apache.flink.streaming.api.functions.source.FileReadFunction
Internal class deprecated in favour of
ContinuousFileMonitoringFunction . |
org.apache.flink.connector.file.src.impl.FileRecordFormatAdapter
Please use
StreamFormatAdapter instead. |
org.apache.flink.connector.file.sink.FileSinkProgram.Generator
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.table.descriptors.FileSystemValidator
The legacy CSV connector has been replaced by
FileSource / FileSink .
It is kept only to support tests for the legacy connector stack. |
org.apache.flink.api.java.operators.FilterOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.util.FiniteTestSource
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.api.java.functions.FlatMapIterator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.FlatMapOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.core.testutils.FlinkMatchers
You should use AssertJ assertions, which have built-in assertions for
CompletableFuture. To check chains of Throwable causes, use FlinkAssertions.anyCauseMatches(String) or FlinkAssertions.anyCauseMatches(Class,
String). |
org.apache.flink.streaming.api.functions.source.FromElementsFunction
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.streaming.api.functions.source.FromIteratorFunction
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.streaming.api.functions.source.FromSplittableIteratorFunction
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.runtime.state.filesystem.FsStateBackend |
org.apache.flink.api.java.functions.FunctionAnnotation
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.common.typeutils.base.GenericArraySerializerConfigSnapshot
This is deprecated and no longer used by the
GenericArraySerializer . It has
been replaced by GenericArraySerializerSnapshot . |
org.apache.flink.api.java.operators.GroupCombineOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.Grouping
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.functions.GroupReduceIterator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.GroupReduceOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.state.api.output.partitioner.HashSelector |
org.apache.flink.connectors.hive.read.HiveBulkFormatAdapter
Please use
HiveInputFormat . |
org.apache.flink.table.catalog.hive.HiveCatalogLock
This class will be removed soon. Please see FLIP-346 for more details.
|
org.apache.flink.streaming.api.functions.IngestionTimeExtractor |
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.table.sources.InputFormatTableSource
This interface has been replaced by
DynamicTableSource . The new interface
produces internal data structures. See FLIP-95 for more information. |
org.apache.flink.table.planner.operations.InternalDataStreamQueryOperation |
org.apache.flink.api.java.operators.IterativeDataSet
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.api.datastream.IterativeStream
This method is deprecated since Flink 1.19. The only known use case of this Iteration
API comes from Flink ML, which already has its own implementation of iteration and no longer
uses this API. If there are any use cases other than Flink ML that need iteration support,
please reach out to dev@flink.apache.org and we can consider making the Flink ML iteration
implementation a separate common library.
|
org.apache.flink.streaming.api.datastream.IterativeStream.ConnectedIterativeStreams
This method is deprecated since Flink 1.19. The only known use case of this
Iteration API comes from Flink ML, which already has its own implementation of iteration
and no longer uses this API. If there are any use cases other than Flink ML that need
iteration support, please reach out to dev@flink.apache.org and we can consider making
the Flink ML iteration implementation a separate common library.
|
org.apache.flink.api.java.io.IteratorInputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.test.util.JavaProgramTestBaseJUnit4
Use
JavaProgramTestBase instead. |
org.apache.flink.runtime.entrypoint.JobClusterEntrypoint
Per-job mode has been deprecated in Flink 1.15 and will be removed in the future.
Please use application mode instead.
|
org.apache.flink.runtime.rest.messages.JobExceptionsInfo.ExecutionExceptionInfo
ExecutionExceptionInfo will be replaced by JobExceptionsInfoWithHistory.ExceptionInfo as part of the effort to deprecate JobExceptionsInfo.allExceptions. |
org.apache.flink.runtime.rest.handler.job.metrics.JobVertexMetricsHandler
This class is subsumed by
SubtaskMetricsHandler and is only kept for
backwards-compatibility. |
org.apache.flink.api.java.operators.JoinOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.join.JoinOperatorSetsBase
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.formats.json.JsonNodeDeserializationSchema
Use
new JsonDeserializationSchema(ObjectNode.class) instead. |
org.apache.flink.formats.json.JsonRowDeserializationSchema
The format was developed for Table API users and will no longer be maintained for
DataStream API users. Either use the Table API or switch to the DataStream API, defining your
own DeserializationSchema. |
org.apache.flink.formats.json.JsonRowSerializationSchema
The format was developed for Table API users and will no longer be maintained for
DataStream API users. Either use the Table API or switch to the DataStream API, defining your
own SerializationSchema. |
org.apache.flink.state.api.KeyedOperatorTransformation |
org.apache.flink.state.api.output.partitioner.KeyGroupRangePartitioner |
org.apache.flink.streaming.api.operators.co.LegacyKeyedCoProcessOperator
Replaced by
KeyedCoProcessOperator which takes KeyedCoProcessFunction |
org.apache.flink.streaming.api.operators.LegacyKeyedProcessOperator
Replaced by
KeyedProcessOperator which takes KeyedProcessFunction |
org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter
Use
DataTypeFactory.createDataType(TypeInformation) instead. Note that this
method will not create legacy types anymore. It fully uses the new type system available only
in the planner. |
org.apache.flink.table.functions.LegacyUserDefinedFunctionInference |
org.apache.flink.table.dataview.ListViewSerializer |
org.apache.flink.table.dataview.ListViewSerializerSnapshot |
org.apache.flink.table.dataview.ListViewTypeInfo |
org.apache.flink.table.dataview.ListViewTypeInfoFactory |
org.apache.flink.api.java.io.LocalCollectionOutputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.LocalEnvironment
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter |
org.apache.flink.table.planner.hint.LookupJoinHintOptions
The configurations have been deprecated as part of FLIP-457 and will be removed in
Flink 2.0. Please use
LookupJoinHintOptions
instead. |
org.apache.flink.table.catalog.ManagedTableListener
This interface will be removed soon. Please see FLIP-346 for more details.
|
org.apache.flink.api.java.operators.MapOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.MapPartitionOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.dataview.MapViewSerializer |
org.apache.flink.table.dataview.MapViewSerializerSnapshot |
org.apache.flink.table.dataview.MapViewTypeInfo |
org.apache.flink.table.dataview.MapViewTypeInfoFactory |
org.apache.flink.runtime.state.memory.MemoryStateBackend |
org.apache.flink.streaming.api.functions.source.MessageAcknowledgingSourceBase
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.streaming.api.functions.source.MultipleIdsMessageAcknowledgingSourceBase
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.test.util.MultipleProgramsTestBaseJUnit4
Use
MultipleProgramsTestBase instead. |
org.apache.flink.table.runtime.groupwindow.NamedWindowProperty
The POJOs in this package are used to represent the deprecated Group Window feature.
Currently, they are also used to configure Python operators.
|
org.apache.flink.state.api.NewSavepoint
For creating a new savepoint, use
SavepointWriter and the DataStream API under batch execution. |
org.apache.flink.cep.nfa.NFA.NFASerializer |
org.apache.flink.cep.pattern.conditions.NotCondition
Please use
RichNotCondition instead. This class exists just for backwards
compatibility and will be removed in FLINK-10113. |
org.apache.flink.table.dataview.NullAwareMapSerializer |
org.apache.flink.table.dataview.NullAwareMapSerializerSnapshot |
org.apache.flink.api.java.summarize.NumericColumnSummary
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.summarize.ObjectColumnSummary
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.descriptors.OldCsvValidator
Use the RFC-compliant
Csv format in the dedicated flink-formats/flink-csv
module instead. |
org.apache.flink.state.api.OneInputOperatorTransformation
Use
OneInputStateTransformation instead. |
org.apache.flink.api.java.operators.Operator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.orc.OrcColumnarRowFileInputFormat
Please use
OrcColumnarRowInputFormat . |
org.apache.flink.cep.pattern.conditions.OrCondition
Please use
RichOrCondition instead. This class exists just for backwards
compatibility and will be removed in FLINK-10113. |
org.apache.flink.streaming.api.functions.sink.OutputFormatSinkFunction
Please use the
StreamingFileSink
for writing to files from a streaming program. |
org.apache.flink.table.sinks.OutputFormatTableSink
This interface has been replaced by
DynamicTableSink . The new interface
consumes internal data structures. See FLIP-95 for more information. |
org.apache.flink.api.java.io.ParallelIteratorInputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.formats.parquet.avro.ParquetAvroWriters
Use
AvroParquetWriters instead. |
org.apache.flink.api.java.operators.PartitionOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.tests.PeriodicStreamingJob.PeriodicSourceGenerator
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.api.java.io.PrimitiveInputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.io.PrintingOutputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.api.functions.sink.PrintSinkFunction
This interface will be removed in future versions. Use the new
PrintSink
interface instead. |
org.apache.flink.table.runtime.groupwindow.ProctimeAttribute
The POJOs in this package are used to represent the deprecated Group Window feature.
Currently, they are also used to configure Python operators.
|
org.apache.flink.runtime.webmonitor.handlers.ProgramArgsQueryParameter
Please use
JarRequestBody.FIELD_NAME_PROGRAM_ARGUMENTS_LIST |
org.apache.flink.api.java.operators.ProjectOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.queryablestate.client.QueryableStateClient
The Queryable State feature is deprecated since Flink 1.18, and will be removed in a
future Flink major version.
|
org.apache.flink.configuration.QueryableStateOptions
The Queryable State feature is deprecated since Flink 1.18, and will be removed in a
future Flink major version.
|
org.apache.flink.streaming.api.datastream.QueryableStateStream
The Queryable State feature is deprecated since Flink 1.18, and will be removed in a
future Flink major version.
|
org.apache.flink.api.java.operators.ReduceOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.RemoteEnvironment
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.common.restartstrategy.RestartStrategies
The
RestartStrategies class is marked as deprecated because starting from
Flink 1.19, all complex Java objects related to configuration should be replaced by
ConfigOption. In a future major version of Flink, this class will be removed entirely. It is
recommended to switch to using the ConfigOptions provided by RestartStrategyOptions for configuring restart strategies,
as in the sketch after this entry.
For more details on using ConfigOption for restart strategies, please refer to the Flink
documentation: restart-strategies |
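A minimal sketch of the ConfigOption-based configuration referenced above, assuming a fixed-delay strategy:

```java
import java.time.Duration;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestartStrategyOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RestartStrategyConfigExample {
    public static void main(String[] args) throws Exception {
        Configuration config = new Configuration();
        config.set(RestartStrategyOptions.RESTART_STRATEGY, "fixed-delay");
        config.set(RestartStrategyOptions.RESTART_STRATEGY_FIXED_DELAY_ATTEMPTS, 3);
        config.set(RestartStrategyOptions.RESTART_STRATEGY_FIXED_DELAY_DELAY, Duration.ofSeconds(10));

        // The restart strategy now comes from configuration instead of RestartStrategies objects.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(config);

        env.fromSequence(1, 10).print();
        env.execute("restart-strategy-config-example");
    }
}
```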
org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.streaming.api.functions.windowing.RichProcessAllWindowFunction
Use ProcessAllWindowFunction instead. |
org.apache.flink.streaming.api.functions.windowing.RichProcessWindowFunction
Use ProcessWindowFunction instead. |
org.apache.flink.streaming.api.functions.sink.RichSinkFunction
This interface will be removed in future versions. Use the new
Sink interface instead. |
org.apache.flink.streaming.api.functions.source.RichSourceFunction
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.contrib.streaming.state.RocksDBStateBackend |
org.apache.flink.contrib.streaming.state.RocksDBStateBackendFactory |
org.apache.flink.api.java.io.RowCsvInputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.descriptors.Rowtime
This class was used for legacy connectors using
Descriptor . |
org.apache.flink.table.runtime.groupwindow.RowtimeAttribute
The POJOs in this package are used to represent the deprecated Group Window feature.
Currently, they are also used to configure Python operators.
|
org.apache.flink.table.sources.RowtimeAttributeDescriptor
This interface will not be supported in the new source design around
DynamicTableSource . Use the concept of computed columns instead. See FLIP-95 for more
information. |
org.apache.flink.table.descriptors.RowtimeValidator
See
Rowtime for details. |
org.apache.flink.state.api.Savepoint
For creating a new savepoint, use
SavepointWriter and the DataStream API under batch execution. For reading a savepoint,
use SavepointReader and the DataStream API under batch execution. |
org.apache.flink.runtime.jobgraph.SavepointConfigOptions |
org.apache.flink.state.api.runtime.metadata.SavepointMetadata |
org.apache.flink.api.scala.operators.ScalaAggregateOperator
All Flink Scala APIs are deprecated and will be removed in a future Flink major
version. You can still build your application in Scala, but you should move to the Java
version of either the DataStream and/or Table API.
|
org.apache.flink.api.scala.operators.ScalaCsvOutputFormat
All Flink Scala APIs are deprecated and will be removed in a future Flink major
version. You can still build your application in Scala, but you should move to the Java
version of either the DataStream and/or Table API.
|
org.apache.flink.table.functions.ScalarFunctionDefinition
Non-legacy functions can simply omit this wrapper for declarations.
|
org.apache.flink.table.descriptors.Schema
This class was used for legacy connectors using
Descriptor . |
org.apache.flink.table.descriptors.SchemaValidator
See
Schema for details. |
org.apache.flink.streaming.tests.SequenceGeneratorSource
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.cep.nfa.SharedBuffer
Everything in this class is deprecated; these are only migration procedures from
older versions.
|
org.apache.flink.cep.nfa.sharedbuffer.SharedBufferNode.SharedBufferNodeSerializer
This serializer was used in Flink <= 1.12. Use
SharedBufferNodeSerializer instead. |
org.apache.flink.streaming.util.serialization.SimpleStringSchema
Use
SimpleStringSchema instead. |
org.apache.flink.api.java.operators.SingleInputOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.SingleInputUdfOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.connector.sink2.Sink.InitContextWrapper
Internal, do not use it.
|
org.apache.flink.table.runtime.groupwindow.SliceEnd
The POJOs in this package are used to represent the deprecated Group Window feature.
Currently, they are also used to configure Python operators.
|
org.apache.flink.streaming.api.functions.sink.SocketClientSink
This interface will be removed in future versions. Use the new
Sink interface instead. |
org.apache.flink.streaming.api.functions.source.SocketTextStreamFunction
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.api.java.operators.SortedGrouping
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.SortPartitionOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.runtime.tasks.SourceStreamTask
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.api.java.io.SplitDataProperties
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.api.functions.source.StatefulSequenceSource
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.test.StatefulStreamingJob.MySource
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
Use
FileSink instead. |
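A hedged sketch of the row-format FileSink replacement (output path and rolling policy are placeholders):

```java
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

public class FileSinkExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        FileSink<String> sink = FileSink
                .forRowFormat(new Path("/tmp/output"), new SimpleStringEncoder<String>("UTF-8"))
                .withRollingPolicy(DefaultRollingPolicy.builder().build())
                .build();

        // sinkTo(...) replaces addSink(StreamingFileSink.forRowFormat(...).build()).
        env.fromElements("a", "b", "c").sinkTo(sink);
        env.execute("file-sink-example");
    }
}
```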
org.apache.flink.streaming.api.environment.StreamPipelineOptions
This option class is deprecated in 1.20 and will be removed in 2.0.
|
org.apache.flink.table.sources.tsextractors.StreamRecordTimestamp
This class will not be supported in the new source design around
DynamicTableSource . See FLIP-95 for more
information. |
org.apache.flink.streaming.api.operators.StreamSource
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.streaming.api.operators.StreamSourceContexts
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.sql.tests.StreamSQLTestProgram.Generator
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor.SynchronizedStreamTaskActionExecutor
This class should only be used in
SourceStreamTask, which exposes the
checkpoint lock as part of the Public API. |
org.apache.flink.api.java.summarize.StringColumnSummary
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.runtime.checkpoint.SubtaskState
Internal class for savepoint backwards compatibility. Don't use for other purposes.
|
org.apache.flink.table.functions.TableAggregateFunctionDefinition
Non-legacy functions can simply omit this wrapper for declarations.
|
org.apache.flink.table.api.TableColumn
See
ResolvedSchema and Column . |
org.apache.flink.table.factories.TableFactoryService |
org.apache.flink.table.functions.TableFunctionDefinition
Non-legacy functions can simply omit this wrapper for declarations.
|
org.apache.flink.table.api.TableSchema
This class has been deprecated as part of FLIP-164. It has been replaced by two more
dedicated classes
Schema and ResolvedSchema . Use Schema for
declaration in APIs. ResolvedSchema is offered by the framework after resolution and
validation. |
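A brief sketch of declaring the replacement Schema (columns and watermark expression are illustrative):

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;

public class SchemaExample {
    public static Schema exampleSchema() {
        return Schema.newBuilder()
                .column("id", DataTypes.BIGINT())
                .column("payload", DataTypes.STRING())
                .column("ts", DataTypes.TIMESTAMP_LTZ(3))
                .watermark("ts", "ts - INTERVAL '5' SECOND")
                .build();
    }
}
```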
org.apache.flink.table.sinks.TableSinkBase
This class is implementing the deprecated
TableSink interface. Implement
DynamicTableSink directly instead. |
org.apache.flink.table.factories.TableSinkFactoryContextImpl |
org.apache.flink.table.factories.TableSourceFactoryContextImpl |
org.apache.flink.runtime.checkpoint.TaskState
Internal class for savepoint backwards compatibility. Don't use for other purposes.
|
org.apache.flink.api.java.io.TextInputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.connector.file.src.reader.TextLineFormat
Please use
TextLineInputFormat . |
org.apache.flink.api.java.io.TextOutputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.io.TextValueInputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.common.time.Time
Use
Duration instead. |
org.apache.flink.streaming.api.windowing.time.Time
Use
Duration instead. |
org.apache.flink.table.typeutils.TimeIndicatorTypeInfo
This class will be removed in future versions as it is used for the old type system.
It is recommended to use
DataTypes instead. Please make sure to use either the old or
the new type system consistently to avoid unintended behavior. See the website documentation
for more information. |
org.apache.flink.table.typeutils.TimeIntervalTypeInfo
This class will be removed in future versions as it is used for the old type system.
It is recommended to use
DataTypes instead. Please make sure to use either the old or
the new type system consistently to avoid unintended behavior. See the website documentation
for more information. |
org.apache.flink.state.api.output.TimestampAssignerWrapper |
org.apache.flink.table.sources.tsextractors.TimestampExtractor
This interface will not be supported in the new source design around
DynamicTableSource . Use the concept of computed columns instead. See FLIP-95 for more
information. |
org.apache.flink.walkthrough.common.source.TransactionSource
This class is based on the
SourceFunction API, which is due to be
removed. Use the new Source API instead. |
org.apache.flink.api.java.operators.TwoInputOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.operators.TwoInputUdfOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction
This interface will be removed in future versions. Use the new
Sink interface instead. |
org.apache.flink.table.runtime.types.TypeInfoDataTypeConverter
Use
InternalTypeInfo.of(LogicalType) instead if TypeInformation is
really required. In many cases, InternalSerializers.create(LogicalType) should be
sufficient. |
org.apache.flink.table.runtime.types.TypeInfoLogicalTypeConverter |
org.apache.flink.table.types.logical.TypeInformationRawType
Use
RawType instead. |
org.apache.flink.streaming.util.serialization.TypeInformationSerializationSchema
Use
TypeInformationSerializationSchema instead. |
org.apache.flink.table.api.Types
This class will be removed in future versions as it uses the old type system. It is
recommended to use
DataTypes instead which uses the new type system based on
instances of DataType . Please make sure to use either the old or the new type system
consistently to avoid unintended behavior. See the website documentation for more
information. |
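A brief sketch of the DataTypes replacement (field names are illustrative):

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

public class DataTypesExample {
    public static DataType exampleRowType() {
        // DataTypes replaces the deprecated org.apache.flink.table.api.Types helpers.
        return DataTypes.ROW(
                DataTypes.FIELD("id", DataTypes.INT()),
                DataTypes.FIELD("name", DataTypes.STRING()),
                DataTypes.FIELD("amount", DataTypes.DECIMAL(10, 2)));
    }
}
```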
org.apache.flink.api.java.io.TypeSerializerInputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.api.java.io.TypeSerializerOutputFormat
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.utils.TypeStringUtils
This utility is based on
TypeInformation . However, the Table & SQL API is
currently updated to use DataTypes based on LogicalTypes. Use LogicalTypeParser instead. |
org.apache.flink.api.java.operators.UnionOperator
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.api.constraints.UniqueConstraint
See
ResolvedSchema and UniqueConstraint . |
org.apache.flink.api.java.operators.UnsortedGrouping
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.api.WatermarkSpec
See
ResolvedSchema and WatermarkSpec . |
org.apache.flink.table.runtime.groupwindow.WindowEnd
The POJOs in this package are used to represent the deprecated Group Window feature.
Currently, they are also used to configure Python operators.
|
org.apache.flink.state.api.WindowReader |
org.apache.flink.table.runtime.groupwindow.WindowReference
The POJOs in this package are used to represent the deprecated Group Window feature.
Currently, they are also used to configure Python operators.
|
org.apache.flink.table.runtime.groupwindow.WindowStart
The POJOs in this package are used to represent the deprecated Group Window feature.
Currently, they are also used to configure Python operators.
|
org.apache.flink.state.api.WritableSavepoint |
org.apache.flink.streaming.api.functions.sink.WriteFormat
Please use the
StreamingFileSink
for writing to files from a streaming program. |
org.apache.flink.streaming.api.functions.sink.WriteFormatAsCsv
Please use the
StreamingFileSink
for writing to files from a streaming program. |
org.apache.flink.streaming.api.functions.sink.WriteFormatAsText
Please use the
StreamingFileSink
for writing to files from a streaming program. |
org.apache.flink.streaming.api.functions.sink.WriteSinkFunction
Please use the
StreamingFileSink
for writing to files from a streaming program. |
org.apache.flink.streaming.api.functions.sink.WriteSinkFunctionByMillis
Please use the
StreamingFileSink
for writing to files from a streaming program. |
org.apache.flink.runtime.rest.messages.YarnCancelJobTerminationHeaders
This should be removed once we can send arbitrary REST calls via the Yarn proxy.
|
org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint
Per-job mode has been deprecated in Flink 1.15 and will be removed in the future. Please
use application mode instead.
|
org.apache.flink.yarn.executors.YarnJobClusterExecutor |
org.apache.flink.yarn.executors.YarnJobClusterExecutorFactory |
org.apache.flink.runtime.rest.messages.YarnStopJobTerminationHeaders
This should be removed once we can send arbitrary REST calls via the Yarn proxy.
|
Enum and Description |
---|
org.apache.flink.api.java.aggregation.Aggregations
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup
This class has been moved to
ExternalizedCheckpointRetention . |
org.apache.flink.streaming.api.CheckpointingMode
This class has been moved to
CheckpointingMode . |
org.apache.flink.table.api.config.ExecutionConfigOptions.LegacyCastBehaviour |
org.apache.flink.api.common.ExecutionMode
The
ExecutionMode is deprecated because it's only used in DataSet APIs. All
Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a future Flink
major version. You can still build your application in DataSet, but you should move to either
the DataStream and/or Table API. |
org.apache.flink.api.common.InputDependencyConstraint
InputDependencyConstraint is not used anymore and will be deleted in one of
the future versions. It was previously used in the scheduler implementations that were
removed as part of FLINK-20589. |
org.apache.flink.api.java.operators.join.JoinType
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
org.apache.flink.table.planner.hint.LookupJoinHintOptions.RetryStrategy
This strategy has been deprecated as a part of FLIP-457 and will be removed in
Flink 2.0. Please use
LookupJoinHintOptions.RetryStrategy instead. |
org.apache.flink.contrib.streaming.state.RocksDBStateBackend.PriorityQueueStateType |
org.apache.flink.streaming.api.TimeCharacteristic
In Flink 1.12 the default stream time characteristic has been changed to
TimeCharacteristic.EventTime , thus you don't need to call this method for enabling
event-time support anymore. Explicitly using processing-time windows and timers works in
event-time mode. If you need to disable watermarks, please use ExecutionConfig.setAutoWatermarkInterval(long). If you are using TimeCharacteristic.IngestionTime, please manually set an appropriate WatermarkStrategy. If you are using generic "time window" operations (for example KeyedStream.timeWindow(org.apache.flink.streaming.api.windowing.time.Time))
that change behaviour based on the time characteristic, please use equivalent operations that
explicitly specify processing time or event time. |
Exceptions and Description |
---|
org.apache.flink.table.api.AmbiguousTableFactoryException
This exception is considered internal and has been erroneously placed in the *.api
package. It is replaced by
AmbiguousTableFactoryException and should not be used
directly anymore. |
org.apache.flink.table.api.ExpressionParserException
This exception is considered internal and has been erroneously placed in the *.api
package. It is replaced by
ExpressionParserException and should not be used directly
anymore. |
org.apache.flink.table.api.NoMatchingTableFactoryException
This exception is considered internal and has been erroneously placed in the *.api
package. It is replaced by
NoMatchingTableFactoryException and should not be used
directly anymore. |
org.apache.flink.api.java.aggregation.UnsupportedAggregationTypeException
All Flink DataSet APIs are deprecated since Flink 1.18 and will be removed in a
future Flink major version. You can still build your application in DataSet, but you should
move to either the DataStream and/or Table API.
|
Annotation Type and Description |
---|
org.apache.flink.metrics.reporter.InstantiateViaFactory
Will be removed in a future version. Users should use all reporters as plugins.
|
org.apache.flink.metrics.reporter.InterceptInstantiationViaReflection
Will be removed in a future version. Users should use all reporters as plugins.
|