This documentation is for Apache Flink version 1.3. These pages have been built at: 02/02/22, 11:10:56 AM UTC.
Apache Flink is an open source platform for distributed stream and batch data processing. Flink’s core is a streaming dataflow engine that provides data distribution, communication, and fault tolerance for distributed computations over data streams. Flink also builds batch processing on top of the streaming engine, overlaying native iteration support, managed memory, and program optimization.
Concepts: Start with the basic concepts of Flink's Dataflow Programming Model and Distributed Runtime Environment. These will help you understand the other parts of the documentation, including the setup and programming guides. Reading these sections first is highly recommended.
Quickstarts: Run an example program on your local machine or study some examples.
Programming Guides: Read the guides on basic API concepts and the DataStream API or DataSet API to learn how to write your first Flink programs.
Before putting your Flink job into production, be sure to read the Production Readiness Checklist.
For users of earlier versions of Apache Flink, we recommend the API migration guide. While all parts of the API that were marked as public and stable are still supported (the public API is backwards compatible), we suggest migrating applications to the newer interfaces where applicable.
For users who plan to upgrade a Flink system in production, we recommend reading the guide on upgrading Apache Flink.
Flink Forward: Talks from past conferences are available at the Flink Forward website and on YouTube. Robust Stream Processing with Apache Flink is a good place to start.
Training: The training materials from data Artisans include slides, exercises, and sample solutions.
Blogs: The Apache Flink and data Artisans blogs publish frequent, in-depth technical articles about Flink.