The focus of this training is to broadly cover the DataStream API well enough that you will be able to get started writing streaming applications.
Flink’s DataStream APIs for Java and Scala will let you stream anything they can serialize. Flink’s own serializer is used for basic types (such as String, Long, Integer, Boolean, and arrays) and composite types (such as Tuples and POJOs), and Flink falls back to Kryo for other types. It is also possible to use other serializers with Flink. Avro, in particular, is well supported.
Flink’s native serializer can operate efficiently on tuples and POJOs.
For Java, Flink defines its own Tuple0 through Tuple25 types.
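As a brief sketch, a Tuple2 can be created with Tuple2.of(...), and its fields are accessed positionally via the f0, f1, ... fields of Flink’s Tuple classes:

```java
import org.apache.flink.api.java.tuple.Tuple2;

public class TupleExample {
    public static void main(String[] args) {
        // Create a 2-tuple holding a name and an age
        Tuple2<String, Integer> person = Tuple2.of("Fred", 35);

        // Tuple fields are accessed positionally as f0, f1, ...
        String name = person.f0;
        Integer age = person.f1;

        System.out.println(name + " is " + age);
    }
}
```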
Flink recognizes a data type as a POJO type (and allows “by-name” field referencing) if the following conditions are fulfilled:

- The class is public and standalone (no non-static inner class)
- The class has a public no-argument constructor
- All non-static, non-transient fields in the class (and all superclasses) are either public (and non-final) or have public getter and setter methods that follow the Java beans naming conventions
Example:
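A minimal class meeting these conditions might look like this (the Person class here is illustrative):

```java
public class Person {
    // Public, non-final fields are directly usable by Flink's serializer
    public String name;
    public Integer age;

    // A public no-argument constructor is required for POJO recognition
    public Person() {}

    public Person(String name, Integer age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public String toString() {
        return this.name + ": age " + this.age.toString();
    }
}
```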
Flink’s serializer supports schema evolution for POJO types.
Scala tuples and case classes work just as you’d expect.
This example takes a stream of records about people as input, and filters it to only include the adults.
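A sketch of such an application, assuming the Person POJO above (the class and field names are illustrative):

```java
import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamingJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Build a small in-memory stream of Person records
        DataStream<Person> flintstones = env.fromElements(
                new Person("Fred", 35),
                new Person("Wilma", 35),
                new Person("Pebbles", 2));

        // Keep only the adults
        DataStream<Person> adults = flintstones.filter(
                new FilterFunction<Person>() {
                    @Override
                    public boolean filter(Person person) {
                        return person.age >= 18;
                    }
                });

        adults.print();

        env.execute();
    }
}
```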
Every Flink application needs an execution environment, env in this example. Streaming applications need to use a StreamExecutionEnvironment.
The DataStream API calls made in your application build a job graph that is attached to the StreamExecutionEnvironment. When env.execute() is called this graph is packaged up and sent to the JobManager, which parallelizes the job and distributes slices of it to the TaskManagers for execution. Each parallel slice of your job will be executed in a task slot.
Note that if you don’t call execute(), your application won’t be run.
This distributed runtime depends on your application being serializable. It also requires that all dependencies are available to each node in the cluster.
The example above constructs a DataStream<Person> using env.fromElements(...). This is a convenient way to throw together a simple stream for use in a prototype or test. There is also a fromCollection(Collection) method on StreamExecutionEnvironment. So instead, you could do this:
Another convenient way to get some data into a stream while prototyping is to use a socket or a file.
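For example, using socketTextStream and readTextFile (the host, port, and file path below are placeholders):

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TextSources {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Each line arriving on the socket becomes a String element
        DataStream<String> socketLines =
                env.socketTextStream("localhost", 9999);

        // Each line of the file becomes a String element
        DataStream<String> fileLines =
                env.readTextFile("file:///path");
    }
}
```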
In real applications the most commonly used data sources are those that support low-latency, high-throughput parallel reads in combination with rewind and replay – the prerequisites for high performance and fault tolerance – such as Apache Kafka, Kinesis, and various filesystems. REST APIs and databases are also frequently used for stream enrichment.
The example above uses adults.print() to print its results to the task manager logs (which will appear in your IDE’s console, when running in an IDE). This will call toString() on each element of the stream.
The output looks something like this:
1> Fred: age 35
2> Wilma: age 35
where 1> and 2> indicate which sub-task (i.e., thread) produced the output.
In production, commonly used sinks include the StreamingFileSink, various databases, and several pub-sub systems.
In production, your application will run in a remote cluster or set of containers. And if it fails, it will fail remotely. The JobManager and TaskManager logs can be very helpful in debugging such failures, but it is much easier to do local debugging inside an IDE, which is something that Flink supports. You can set breakpoints, examine local variables, and step through your code. You can also step into Flink’s code, which can be a great way to learn more about its internals if you are curious to see how Flink works.
At this point you know enough to get started coding and running a simple DataStream application. Clone the flink-training repo, and after following the instructions in the README, do the first exercise: Filtering a Stream (Ride Cleansing).