Modifier and Type | Method and Description |
---|---|
protected Row | RowCsvInputFormat.fillRecord(Row reuse, Object[] parsedValues) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> | RowCsvInputFormat.getProducedType() |
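The two tables above describe the reading contract of RowCsvInputFormat: fillRecord(Row reuse, Object[] parsedValues) writes parsed field values into a caller-provided row so no new row is allocated per record, and getProducedType() advertises the resulting TypeInformation<Row>. A minimal sketch of the reuse pattern, modeling Row as a plain Object[] (illustrative only; the class and method names here are not the real Flink types):

```java
class FillRecordSketch {
    // Copy each parsed value into the reused row and return the same instance,
    // mirroring the fillRecord(Row reuse, Object[] parsedValues) contract.
    static Object[] fillRecord(Object[] reuse, Object[] parsedValues) {
        for (int i = 0; i < parsedValues.length; i++) {
            reuse[i] = parsedValues[i];
        }
        return reuse; // same instance: no per-record allocation
    }

    public static void main(String[] args) {
        Object[] reuse = new Object[3];
        Object[] out = fillRecord(reuse, new Object[]{1, "a", 2.0});
        System.out.println(out == reuse); // true
    }
}
```

The reuse-object style is a deliberate Flink convention for hot deserialization paths: the runtime hands the same row back on every call to keep garbage-collection pressure low.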
Modifier and Type | Method and Description |
---|---|
protected Row | RowCsvInputFormat.fillRecord(Row reuse, Object[] parsedValues) |
Modifier and Type | Method and Description |
---|---|
Row | JDBCInputFormat.nextRecord(Row row) Stores the next ResultSet row in a tuple. |
Modifier and Type | Method and Description |
---|---|
Row | JDBCInputFormat.nextRecord(Row row) Stores the next ResultSet row in a tuple. |
void | JDBCOutputFormat.writeRecord(Row row) Adds a record to the prepared statement. |
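JDBCInputFormat.nextRecord follows the same row-reuse convention as the CSV format: the caller passes in a row, the format fills it from the current ResultSet position, and a null return signals that the result set is exhausted. A sketch of that loop contract, backed by an in-memory list instead of a live java.sql connection (all names here are illustrative, not the Flink API):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

class NextRecordSketch {
    private final Iterator<Object[]> rows;

    NextRecordSketch(List<Object[]> data) {
        this.rows = data.iterator();
    }

    // Fill `reuse` from the next source row, or return null at end of input,
    // mirroring the nextRecord(Row) contract described above.
    Object[] nextRecord(Object[] reuse) {
        if (!rows.hasNext()) {
            return null;
        }
        Object[] src = rows.next();
        System.arraycopy(src, 0, reuse, 0, src.length);
        return reuse;
    }

    public static void main(String[] args) {
        NextRecordSketch in = new NextRecordSketch(Arrays.asList(
                new Object[]{1, "a"}, new Object[]{2, "b"}));
        Object[] row = new Object[2];
        int count = 0;
        while (in.nextRecord(row) != null) { // the canonical read loop
            count++;
        }
        System.out.println(count); // 2
    }
}
```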
Modifier and Type | Method and Description |
---|---|
TypeComparator<Row> | RowTypeInfo.createComparator(int[] logicalKeyFields, boolean[] orders, int logicalFieldOffset, ExecutionConfig config) |
TypeSerializer<Row> | RowTypeInfo.createSerializer(ExecutionConfig config) |
protected CompositeType.TypeComparatorBuilder<Row> | RowTypeInfo.createTypeComparatorBuilder() |
Modifier and Type | Method and Description |
---|---|
Row | RowSerializer.copy(Row from) |
Row | RowSerializer.copy(Row from, Row reuse) |
Row | RowSerializer.createInstance() |
Row | RowSerializer.deserialize(DataInputView source) |
Row | RowSerializer.deserialize(Row reuse, DataInputView source) |
Row | RowComparator.readWithKeyDenormalization(Row reuse, DataInputView source) |
Modifier and Type | Method and Description |
---|---|
TypeSerializer<Row> | RowSerializer.duplicate() |
TypeComparator<Row> | RowComparator.duplicate() |
Modifier and Type | Method and Description |
---|---|
int | RowComparator.compare(Row first, Row second) |
Row | RowSerializer.copy(Row from) |
Row | RowSerializer.copy(Row from, Row reuse) |
Row | RowSerializer.deserialize(Row reuse, DataInputView source) |
boolean | RowComparator.equalToReference(Row candidate) |
int | RowComparator.hash(Row record) |
void | RowComparator.putNormalizedKey(Row record, MemorySegment target, int offset, int numBytes) |
Row | RowComparator.readWithKeyDenormalization(Row reuse, DataInputView source) |
void | RowSerializer.serialize(Row record, DataOutputView target) |
void | RowComparator.setReference(Row toCompare) |
static void | NullMaskUtils.writeNullMask(int len, Row value, DataOutputView target) |
void | RowComparator.writeWithKeyNormalization(Row record, DataOutputView target) |
Modifier and Type | Method and Description |
---|---|
int | RowComparator.compareToReference(TypeComparator<Row> referencedComparator) |
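The serializer methods above, together with NullMaskUtils.writeNullMask, hint at the wire layout: a row is written as a leading null mask (one flag per field) followed by the encodings of only the non-null fields. A self-contained sketch of that serialize/deserialize round trip, with fields modeled as String-or-null and one boolean per mask entry (the real RowSerializer packs mask bits into bytes and delegates each field to its own TypeSerializer):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Arrays;

class RowSerializerSketch {
    // Write the null mask first, then only the non-null values.
    static byte[] serialize(String[] row) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            for (String field : row) out.writeBoolean(field == null); // null mask
            for (String field : row) if (field != null) out.writeUTF(field);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Read the mask, then decode values only for the non-null positions.
    static String[] deserialize(byte[] bytes, int arity) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            boolean[] isNull = new boolean[arity];
            for (int i = 0; i < arity; i++) isNull[i] = in.readBoolean();
            String[] row = new String[arity];
            for (int i = 0; i < arity; i++) if (!isNull[i]) row[i] = in.readUTF();
            return row;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        String[] row = {"alice", null, "berlin"};
        String[] copy = deserialize(serialize(row), 3);
        System.out.println(Arrays.equals(row, copy)); // true
    }
}
```

The mask-first layout is what lets Row carry null fields at all, which plain Tuple serialization in this Flink generation could not.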
Modifier and Type | Field and Description |
---|---|
protected KafkaPartitioner<Row> | KafkaTableSink.partitioner |
protected SerializationSchema<Row> | KafkaTableSink.serializationSchema |
Modifier and Type | Method and Description |
---|---|
protected abstract FlinkKafkaProducerBase<Row> | KafkaTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, KafkaPartitioner<Row> partitioner) Returns the version-specific Kafka producer. |
protected FlinkKafkaProducerBase<Row> | Kafka09JsonTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, KafkaPartitioner<Row> partitioner) |
protected FlinkKafkaProducerBase<Row> | Kafka08JsonTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, KafkaPartitioner<Row> partitioner) |
protected abstract SerializationSchema<Row> | KafkaTableSink.createSerializationSchema(String[] fieldNames) Create the serialization schema for converting table rows into bytes. |
protected SerializationSchema<Row> | KafkaJsonTableSink.createSerializationSchema(String[] fieldNames) |
DataStream<Row> | KafkaTableSource.getDataStream(StreamExecutionEnvironment env) NOTE: This method is for internal use only for defining a TableSource. |
protected DeserializationSchema<Row> | KafkaTableSource.getDeserializationSchema() Returns the deserialization schema. |
TypeInformation<Row> | KafkaTableSink.getOutputType() |
TypeInformation<Row> | KafkaTableSource.getReturnType() |
Modifier and Type | Method and Description |
---|---|
protected abstract FlinkKafkaProducerBase<Row> | KafkaTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, KafkaPartitioner<Row> partitioner) Returns the version-specific Kafka producer. |
protected abstract FlinkKafkaProducerBase<Row> | KafkaTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, KafkaPartitioner<Row> partitioner) Returns the version-specific Kafka producer. |
protected FlinkKafkaProducerBase<Row> | Kafka09JsonTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, KafkaPartitioner<Row> partitioner) |
protected FlinkKafkaProducerBase<Row> | Kafka09JsonTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, KafkaPartitioner<Row> partitioner) |
protected FlinkKafkaProducerBase<Row> | Kafka08JsonTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, KafkaPartitioner<Row> partitioner) |
protected FlinkKafkaProducerBase<Row> | Kafka08JsonTableSink.createKafkaProducer(String topic, Properties properties, SerializationSchema<Row> serializationSchema, KafkaPartitioner<Row> partitioner) |
void | KafkaTableSink.emitDataStream(DataStream<Row> dataStream) |
Constructor and Description |
---|
Kafka010TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, Class<?>[] fieldTypes) Creates a Kafka 0.10 StreamTableSource. |
Kafka010TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, TypeInformation<?>[] fieldTypes) Creates a Kafka 0.10 StreamTableSource. |
Kafka08JsonTableSink(String topic, Properties properties, KafkaPartitioner<Row> partitioner) Creates a KafkaTableSink for Kafka 0.8. |
Kafka08TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, Class<?>[] fieldTypes) Creates a Kafka 0.8 StreamTableSource. |
Kafka08TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, TypeInformation<?>[] fieldTypes) Creates a Kafka 0.8 StreamTableSource. |
Kafka09JsonTableSink(String topic, Properties properties, KafkaPartitioner<Row> partitioner) Creates a KafkaTableSink for Kafka 0.9. |
Kafka09TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, Class<?>[] fieldTypes) Creates a Kafka 0.9 StreamTableSource. |
Kafka09TableSource(String topic, Properties properties, DeserializationSchema<Row> deserializationSchema, String[] fieldNames, TypeInformation<?>[] fieldTypes) Creates a Kafka 0.9 StreamTableSource. |
KafkaJsonTableSink(String topic, Properties properties, KafkaPartitioner<Row> partitioner) Creates a KafkaJsonTableSink. |
KafkaTableSink(String topic, Properties properties, KafkaPartitioner<Row> partitioner) Creates a KafkaTableSink. |
Modifier and Type | Method and Description |
---|---|
Row | JsonRowDeserializationSchema.deserialize(byte[] message) |
Modifier and Type | Method and Description |
---|---|
TypeInformation<Row> | JsonRowDeserializationSchema.getProducedType() |
Modifier and Type | Method and Description |
---|---|
boolean | JsonRowDeserializationSchema.isEndOfStream(Row nextElement) |
byte[] | JsonRowSerializationSchema.serialize(Row row) |
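JsonRowSerializationSchema.serialize(Row) pairs the row's field values with the schema's configured field names and emits a JSON object as UTF-8 bytes. A hand-rolled sketch of that shape for flat rows of strings, numbers, and nulls (illustrative only; the real schema builds the object with Jackson and handles nested types and escaping):

```java
import java.nio.charset.StandardCharsets;

class JsonRowSketch {
    // Encode one row as a flat JSON object: fieldNames[i] becomes the key for row[i].
    static byte[] serialize(String[] fieldNames, Object[] row) {
        StringBuilder sb = new StringBuilder("{");
        for (int i = 0; i < fieldNames.length; i++) {
            if (i > 0) sb.append(',');
            sb.append('"').append(fieldNames[i]).append("\":");
            Object v = row[i];
            if (v == null) sb.append("null");
            else if (v instanceof Number) sb.append(v);   // numbers are unquoted
            else sb.append('"').append(v).append('"');    // everything else as a string
        }
        return sb.append('}').toString().getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = serialize(new String[]{"name", "age"}, new Object[]{"bob", 42});
        System.out.println(new String(bytes, StandardCharsets.UTF_8)); // {"name":"bob","age":42}
    }
}
```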
Modifier and Type | Method and Description |
---|---|
protected Row | AggregateReduceGroupFunction.aggregateBuffer() |
Row | AggregateReduceCombineFunction.combine(Iterable<Row> records) For sub-grouped intermediate aggregate Rows, merge all of them into the aggregate buffer. |
Row | IncrementalAggregateReduceFunction.reduce(Row value1, Row value2) For incremental intermediate aggregate Rows, merge value1 and value2 into the aggregate buffer and return the aggregate buffer. |
Modifier and Type | Method and Description |
---|---|
static RichGroupReduceFunction<Row,Row> | AggregateUtil.createAggregateGroupReduceFunction(scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings) Create a GroupReduceFunction to compute aggregates. |
static RichGroupReduceFunction<Row,Row> | AggregateUtil.createAggregateGroupReduceFunction(scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings) Create a GroupReduceFunction to compute aggregates. |
RichGroupReduceFunction<Row,Row> | AggregateUtil$.createAggregateGroupReduceFunction(scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings) Create a GroupReduceFunction to compute aggregates. |
RichGroupReduceFunction<Row,Row> | AggregateUtil$.createAggregateGroupReduceFunction(scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings) Create a GroupReduceFunction to compute aggregates. |
static AllWindowFunction<Row,Row,Window> | AggregateUtil.createAllWindowAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create an AllWindowFunction to compute non-partitioned group window aggregates. |
static AllWindowFunction<Row,Row,Window> | AggregateUtil.createAllWindowAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create an AllWindowFunction to compute non-partitioned group window aggregates. |
AllWindowFunction<Row,Row,Window> | AggregateUtil$.createAllWindowAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create an AllWindowFunction to compute non-partitioned group window aggregates. |
AllWindowFunction<Row,Row,Window> | AggregateUtil$.createAllWindowAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create an AllWindowFunction to compute non-partitioned group window aggregates. |
static AllWindowFunction<Row,Row,Window> | AggregateUtil.createAllWindowIncrementalAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create an AllWindowFunction to finalize incrementally pre-computed non-partitioned window aggregates. |
static AllWindowFunction<Row,Row,Window> | AggregateUtil.createAllWindowIncrementalAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create an AllWindowFunction to finalize incrementally pre-computed non-partitioned window aggregates. |
AllWindowFunction<Row,Row,Window> | AggregateUtil$.createAllWindowIncrementalAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create an AllWindowFunction to finalize incrementally pre-computed non-partitioned window aggregates. |
AllWindowFunction<Row,Row,Window> | AggregateUtil$.createAllWindowIncrementalAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create an AllWindowFunction to finalize incrementally pre-computed non-partitioned window aggregates. |
static MapFunction<Object,Row> | AggregateUtil.createPrepareMapFunction(scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, int[] groupings, org.apache.calcite.rel.type.RelDataType inputType) Create a MapFunction that prepares for aggregates. |
MapFunction<Object,Row> | AggregateUtil$.createPrepareMapFunction(scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, int[] groupings, org.apache.calcite.rel.type.RelDataType inputType) Create a MapFunction that prepares for aggregates. |
static WindowFunction<Row,Row,Tuple,Window> | AggregateUtil.createWindowAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create a WindowFunction to compute partitioned group window aggregates. |
static WindowFunction<Row,Row,Tuple,Window> | AggregateUtil.createWindowAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create a WindowFunction to compute partitioned group window aggregates. |
WindowFunction<Row,Row,Tuple,Window> | AggregateUtil$.createWindowAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create a WindowFunction to compute partitioned group window aggregates. |
WindowFunction<Row,Row,Tuple,Window> | AggregateUtil$.createWindowAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create a WindowFunction to compute partitioned group window aggregates. |
static WindowFunction<Row,Row,Tuple,Window> | AggregateUtil.createWindowIncrementalAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create a WindowFunction to finalize incrementally pre-computed window aggregates. |
static WindowFunction<Row,Row,Tuple,Window> | AggregateUtil.createWindowIncrementalAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create a WindowFunction to finalize incrementally pre-computed window aggregates. |
WindowFunction<Row,Row,Tuple,Window> | AggregateUtil$.createWindowIncrementalAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create a WindowFunction to finalize incrementally pre-computed window aggregates. |
WindowFunction<Row,Row,Tuple,Window> | AggregateUtil$.createWindowIncrementalAggregationFunction(LogicalWindow window, scala.collection.Seq<org.apache.calcite.util.Pair<org.apache.calcite.rel.core.AggregateCall,String>> namedAggregates, org.apache.calcite.rel.type.RelDataType inputType, org.apache.calcite.rel.type.RelDataType outputType, int[] groupings, scala.collection.Seq<FlinkRelBuilder.NamedWindowProperty> properties) Create a WindowFunction to finalize incrementally pre-computed window aggregates. |
Collector<Row> | TimeWindowPropertyCollector.wrappedCollector() |
Modifier and Type | Method and Description |
---|---|
void | TimeWindowPropertyCollector.collect(Row record) |
Object | LongAvgAggregate.doEvaluate(Row buffer) |
Object | ShortAvgAggregate.doEvaluate(Row buffer) |
Object | IntAvgAggregate.doEvaluate(Row buffer) |
Object | FloatAvgAggregate.doEvaluate(Row buffer) |
abstract Object | IntegralAvgAggregate.doEvaluate(Row buffer) |
Object | ByteAvgAggregate.doEvaluate(Row buffer) |
Object | DoubleAvgAggregate.doEvaluate(Row buffer) |
abstract Object | FloatingAvgAggregate.doEvaluate(Row buffer) |
void | LongAvgAggregate.doPrepare(Object value, Row partial) |
void | ShortAvgAggregate.doPrepare(Object value, Row partial) |
void | IntAvgAggregate.doPrepare(Object value, Row partial) |
void | FloatAvgAggregate.doPrepare(Object value, Row partial) |
abstract void | IntegralAvgAggregate.doPrepare(Object value, Row partial) |
void | ByteAvgAggregate.doPrepare(Object value, Row partial) |
void | DoubleAvgAggregate.doPrepare(Object value, Row partial) |
abstract void | FloatingAvgAggregate.doPrepare(Object value, Row partial) |
T | MinAggregate.evaluate(Row buffer) Return the final aggregated result based on the aggregate buffer. |
T | MaxAggregate.evaluate(Row buffer) Return the final aggregated result based on the aggregate buffer. |
T | SumAggregate.evaluate(Row buffer) |
BigDecimal | DecimalSumAggregate.evaluate(Row buffer) |
BigDecimal | DecimalAvgAggregate.evaluate(Row buffer) |
BigDecimal | DecimalMinAggregate.evaluate(Row buffer) |
T | IntegralAvgAggregate.evaluate(Row buffer) |
BigDecimal | DecimalMaxAggregate.evaluate(Row buffer) |
T | FloatingAvgAggregate.evaluate(Row buffer) |
T | Aggregate.evaluate(Row buffer) Calculate the final aggregated result based on the aggregate buffer. |
long | CountAggregate.evaluate(Row buffer) |
void | MinAggregate.initiate(Row intermediate) Initiate the intermediate aggregate value in Row. |
void | MaxAggregate.initiate(Row intermediate) Initiate the intermediate aggregate value in Row. |
void | LongAvgAggregate.initiate(Row partial) |
void | SumAggregate.initiate(Row partial) |
void | DecimalSumAggregate.initiate(Row partial) |
void | DecimalAvgAggregate.initiate(Row partial) |
void | DecimalMinAggregate.initiate(Row intermediate) |
void | IntegralAvgAggregate.initiate(Row partial) |
void | DecimalMaxAggregate.initiate(Row intermediate) |
void | FloatingAvgAggregate.initiate(Row partial) |
void | Aggregate.initiate(Row intermediate) Initiate the intermediate aggregate value in Row. |
void | CountAggregate.initiate(Row intermediate) |
void | MinAggregate.merge(Row partial, Row buffer) Accessed in CombineFunction and GroupReduceFunction; merge the partial aggregate result into the aggregate buffer. |
void | MaxAggregate.merge(Row intermediate, Row buffer) Accessed in CombineFunction and GroupReduceFunction; merge the partial aggregate result into the aggregate buffer. |
void | LongAvgAggregate.merge(Row partial, Row buffer) |
void | SumAggregate.merge(Row partial1, Row buffer) |
void | DecimalSumAggregate.merge(Row partial1, Row buffer) |
void | DecimalAvgAggregate.merge(Row partial, Row buffer) |
void | DecimalMinAggregate.merge(Row partial, Row buffer) |
void | IntegralAvgAggregate.merge(Row partial, Row buffer) |
void | DecimalMaxAggregate.merge(Row partial, Row buffer) |
void | FloatingAvgAggregate.merge(Row partial, Row buffer) |
void | Aggregate.merge(Row intermediate, Row buffer) Merge intermediate aggregate data into the aggregate buffer. |
void | CountAggregate.merge(Row intermediate, Row buffer) |
void | MinAggregate.prepare(Object value, Row partial) Accessed in MapFunction; prepare the input of the partial aggregate. |
void | MaxAggregate.prepare(Object value, Row intermediate) Accessed in MapFunction; prepare the input of the partial aggregate. |
void | LongAvgAggregate.prepare(Object value, Row partial) |
void | SumAggregate.prepare(Object value, Row partial) |
void | DecimalSumAggregate.prepare(Object value, Row partial) |
void | DecimalAvgAggregate.prepare(Object value, Row partial) |
void | DecimalMinAggregate.prepare(Object value, Row partial) |
void | IntegralAvgAggregate.prepare(Object value, Row partial) |
void | DecimalMaxAggregate.prepare(Object value, Row partial) |
void | FloatingAvgAggregate.prepare(Object value, Row partial) |
void | Aggregate.prepare(Object value, Row intermediate) Transform the aggregate field value into intermediate aggregate data. |
void | CountAggregate.prepare(Object value, Row intermediate) |
Row | IncrementalAggregateReduceFunction.reduce(Row value1, Row value2) For incremental intermediate aggregate Rows, merge value1 and value2 into the aggregate buffer and return the aggregate buffer. |
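The Aggregate methods above form a classic partial-aggregation lifecycle: initiate() seeds a buffer row, prepare() maps one input value into a partial buffer on the map side, merge() combines buffers on the combine/reduce side, and evaluate() extracts the final result. A minimal sum aggregate over a one-field Object[] buffer makes the flow concrete (an illustrative model of the contract, not the real Flink interface):

```java
class SumAggregateSketch {
    static final int AGG_INDEX = 0; // position of this aggregate's field in the buffer row

    // Seed the buffer with the sum's identity element.
    static void initiate(Object[] buffer) {
        buffer[AGG_INDEX] = 0L;
    }

    // Map side: turn one input value into a partial aggregate buffer.
    static void prepare(Object value, Object[] buffer) {
        buffer[AGG_INDEX] = ((Number) value).longValue();
    }

    // Combine/reduce side: fold a partial buffer into the running buffer.
    static void merge(Object[] partial, Object[] buffer) {
        buffer[AGG_INDEX] = (Long) buffer[AGG_INDEX] + (Long) partial[AGG_INDEX];
    }

    // Extract the final aggregated result from the buffer.
    static long evaluate(Object[] buffer) {
        return (Long) buffer[AGG_INDEX];
    }

    public static void main(String[] args) {
        Object[] buffer = new Object[1];
        initiate(buffer);
        for (int v : new int[]{3, 4, 5}) {
            Object[] partial = new Object[1];
            prepare(v, partial);
            merge(partial, buffer);
        }
        System.out.println(evaluate(buffer)); // 12
    }
}
```

Splitting the work this way is what makes the aggregates combinable: merge() can run on pre-aggregated partials from different subtasks before the final evaluate().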
Modifier and Type | Method and Description |
---|---|
void | AggregateAllTimeWindowFunction.apply(TimeWindow window, Iterable<Row> input, Collector<Row> out) |
void | AggregateAllTimeWindowFunction.apply(TimeWindow window, Iterable<Row> input, Collector<Row> out) |
void | IncrementalAggregateAllTimeWindowFunction.apply(TimeWindow window, Iterable<Row> records, Collector<Row> out) |
void | IncrementalAggregateAllTimeWindowFunction.apply(TimeWindow window, Iterable<Row> records, Collector<Row> out) |
void | IncrementalAggregateTimeWindowFunction.apply(Tuple key, TimeWindow window, Iterable<Row> records, Collector<Row> out) |
void | IncrementalAggregateTimeWindowFunction.apply(Tuple key, TimeWindow window, Iterable<Row> records, Collector<Row> out) |
void | AggregateTimeWindowFunction.apply(Tuple key, TimeWindow window, Iterable<Row> input, Collector<Row> out) |
void | AggregateTimeWindowFunction.apply(Tuple key, TimeWindow window, Iterable<Row> input, Collector<Row> out) |
void | IncrementalAggregateWindowFunction.apply(Tuple key, W window, Iterable<Row> records, Collector<Row> out) Calculate aggregated values output by the aggregate buffer, and set them into the output Row based on the mapping relation between intermediate aggregate data and output data. |
void | IncrementalAggregateWindowFunction.apply(Tuple key, W window, Iterable<Row> records, Collector<Row> out) Calculate aggregated values output by the aggregate buffer, and set them into the output Row based on the mapping relation between intermediate aggregate data and output data. |
void | AggregateWindowFunction.apply(Tuple key, W window, Iterable<Row> input, Collector<Row> out) |
void | AggregateWindowFunction.apply(Tuple key, W window, Iterable<Row> input, Collector<Row> out) |
void | IncrementalAggregateAllWindowFunction.apply(W window, Iterable<Row> records, Collector<Row> out) Calculate aggregated values output by the aggregate buffer, and set them into the output Row based on the mapping relation between intermediate aggregate data and output data. |
void | IncrementalAggregateAllWindowFunction.apply(W window, Iterable<Row> records, Collector<Row> out) Calculate aggregated values output by the aggregate buffer, and set them into the output Row based on the mapping relation between intermediate aggregate data and output data. |
void | AggregateAllWindowFunction.apply(W window, Iterable<Row> input, Collector<Row> out) |
void | AggregateAllWindowFunction.apply(W window, Iterable<Row> input, Collector<Row> out) |
Row | AggregateReduceCombineFunction.combine(Iterable<Row> records) For sub-grouped intermediate aggregate Rows, merge all of them into the aggregate buffer. |
void | AggregateReduceGroupFunction.reduce(Iterable<Row> records, Collector<Row> out) For grouped intermediate aggregate Rows, merge all of them into the aggregate buffer, calculate aggregated values output by the aggregate buffer, and set them into the output Row based on the mapping relation between intermediate aggregate data and output data. |
void | AggregateReduceGroupFunction.reduce(Iterable<Row> records, Collector<Row> out) For grouped intermediate aggregate Rows, merge all of them into the aggregate buffer, calculate aggregated values output by the aggregate buffer, and set them into the output Row based on the mapping relation between intermediate aggregate data and output data. |
Modifier and Type | Method and Description |
---|---|
protected TableSinkBase<Row> | CsvTableSink.copy() |
TypeInformation<Row> | CsvTableSink.getOutputType() |
Modifier and Type | Method and Description |
---|---|
String | CsvFormatter.map(Row row) |
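CsvFormatter.map(Row) renders one row as one line of CSV output for CsvTableSink. A minimal sketch of that mapping with a hypothetical delimiter parameter (the real formatter takes its delimiter from the sink's configuration); null fields are written as empty:

```java
class CsvFormatterSketch {
    // Join the row's fields with the delimiter; nulls become empty cells.
    static String map(Object[] row, String fieldDelim) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < row.length; i++) {
            if (i > 0) sb.append(fieldDelim);
            if (row[i] != null) sb.append(row[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(map(new Object[]{1, "a", null}, ",")); // 1,a,
    }
}
```

Note this sketch does no quoting or escaping of delimiters inside field values, which a production CSV writer would need.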
Modifier and Type | Method and Description |
---|---|
void | CsvTableSink.emitDataSet(DataSet<Row> dataSet) |
void | CsvTableSink.emitDataStream(DataStream<Row> dataStream) |
Modifier and Type | Method and Description |
---|---|
DataSet<Row> | CsvTableSource.getDataSet(ExecutionEnvironment execEnv) Returns the data of the table as a DataSet of Row. |
DataStream<Row> | CsvTableSource.getDataStream(StreamExecutionEnvironment streamExecEnv) Returns the data of the table as a DataStream of Row. |
Copyright © 2014–2017 The Apache Software Foundation. All rights reserved.