@Internal
public class ArrowPythonScalarFunctionOperator
extends AbstractRowPythonScalarFunctionOperator

ScalarFunction operator for the old planner.

Field Summary

Fields inherited from the superclass hierarchy (nearest superclass first):

- cRowWrapper
- forwardedFields, scalarFunctions
- bais, baisWrapper, baos, baosWrapper, forwardedInputQueue, inputType, outputType, userDefinedFunctionInputOffsets, userDefinedFunctionInputType, userDefinedFunctionOutputType
- elementCount, maxBundleSize, pythonFunctionRunner
- chainingStrategy, latencyStats, LOG, metrics, output, processingTimeService
Constructor Summary

| Constructor and Description |
|---|
| ArrowPythonScalarFunctionOperator(Configuration config, PythonFunctionInfo[] scalarFunctions, RowType inputType, RowType outputType, int[] udfInputOffsets, int[] forwardedFields) |
Method Summary

| Modifier and Type | Method and Description |
|---|---|
| void | close(): This method is called after all records have been added to the operator via Input.processElement(StreamRecord), or TwoInputStreamOperator.processElement1(StreamRecord) and TwoInputStreamOperator.processElement2(StreamRecord). |
| void | dispose(): This method is called at the very end of the operator's life, both in the case of a successful completion of the operation and in the case of a failure and canceling. |
| void | emitResult(Tuple2<byte[],Integer> resultTuple): Sends the execution result to the downstream operator. |
| void | endInput(): It is notified that no more data will arrive on the input. |
| String | getInputOutputCoderUrn() |
| protected void | invokeFinishBundle() |
| void | open(): This method is called immediately before any elements are processed; it should contain the operator's initialization logic. |
| void | processElementInternal(org.apache.flink.table.runtime.types.CRow value) |
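The lifecycle methods summarized above run in a fixed order: open() before any element is processed, processElement for each record, endInput() once a bounded input is exhausted, close() to flush remaining buffered data, and dispose() to release resources. The sketch below illustrates that contract only; the class and method bodies are hypothetical stand-ins, not the Flink API.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the stream-operator lifecycle contract described in the
// method summary. Hypothetical class; real operators extend Flink base classes.
public class LifecycleSketch {
    final List<String> calls = new ArrayList<>();
    private boolean opened;

    void open() {                       // initialization logic goes here
        opened = true;
        calls.add("open");
    }

    void processElement(String value) { // one call per input record
        if (!opened) throw new IllegalStateException("open() must run first");
        calls.add("process:" + value);
    }

    void endInput() {                   // no more data will arrive
        calls.add("endInput");
    }

    void close() {                      // flush remaining buffered data
        calls.add("close");
    }

    void dispose() {                    // release all acquired resources
        calls.add("dispose");
    }

    public static void main(String[] args) {
        LifecycleSketch op = new LifecycleSketch();
        op.open();
        op.processElement("a");
        op.endInput();
        op.close();
        op.dispose();
        System.out.println(op.calls);
    }
}
```

Note that close() is part of the successful-completion path, while dispose() runs in both the success and the failure/cancel path.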
Methods inherited from superclasses (nearest superclass first):

- bufferInput, getFunctionInput
- getFunctionUrn, getPythonEnv, getUserDefinedFunctionsProto
- createPythonFunctionRunner, processElement
- checkInvokeFinishBundleByCount, createPythonEnvironmentManager, emitResults, getConfig, getFlinkMetricContainer, getPythonConfig, isBundleFinished, prepareSnapshotPreBarrier, processWatermark, setPythonConfig
- getChainingStrategy, getContainingTask, getCurrentKey, getExecutionConfig, getInternalTimerService, getKeyedStateBackend, getKeyedStateStore, getMetricGroup, getOperatorConfig, getOperatorID, getOperatorName, getOperatorStateBackend, getOrCreateKeyedState, getPartitionedState, getProcessingTimeService, getRuntimeContext, getTimeServiceManager, getUserCodeClassloader, initializeState, isUsingCustomRawKeyedState, notifyCheckpointAborted, notifyCheckpointComplete, processLatencyMarker, processLatencyMarker1, processLatencyMarker2, processWatermark1, processWatermark2, reportOrForwardLatencyMarker, setChainingStrategy, setCurrentKey, setKeyContextElement1, setKeyContextElement2, setProcessingTimeService, setup, snapshotState

Methods inherited from class java.lang.Object:

- clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from implemented interfaces:

- setKeyContextElement
- getMetricGroup, getOperatorID, initializeState, prepareSnapshotPreBarrier, setKeyContextElement1, setKeyContextElement2, snapshotState
- notifyCheckpointAborted, notifyCheckpointComplete
- getCurrentKey, setCurrentKey
- processLatencyMarker, processWatermark
Constructor Detail

ArrowPythonScalarFunctionOperator
public ArrowPythonScalarFunctionOperator(Configuration config, PythonFunctionInfo[] scalarFunctions, RowType inputType, RowType outputType, int[] udfInputOffsets, int[] forwardedFields)
Method Detail

open
public void open() throws Exception

Description copied from class: AbstractStreamOperator
The default implementation does nothing.

Specified by: open in interface StreamOperator<org.apache.flink.table.runtime.types.CRow>
Overrides: open in class AbstractRowPythonScalarFunctionOperator
Throws: Exception - An exception in this method causes the operator to fail.

invokeFinishBundle
protected void invokeFinishBundle() throws Exception

Overrides: invokeFinishBundle in class AbstractPythonFunctionOperator<org.apache.flink.table.runtime.types.CRow>
Throws: Exception
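invokeFinishBundle() flushes the current bundle of buffered elements, and the inherited checkInvokeFinishBundleByCount together with the elementCount and maxBundleSize members suggests the bundle is also finished automatically once enough elements have accumulated. The following is a hedged, self-contained sketch of that counting pattern; the names mirror the inherited members but this is illustrative code, not the Flink implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: buffer elements into a bundle and finish the bundle
// once the element count reaches maxBundleSize. Not the Flink implementation.
public class BundleSketch {
    private final int maxBundleSize;
    private int elementCount;
    private final List<Integer> bundle = new ArrayList<>();
    final List<List<Integer>> finishedBundles = new ArrayList<>();

    BundleSketch(int maxBundleSize) {
        this.maxBundleSize = maxBundleSize;
    }

    void processElement(int value) {
        bundle.add(value);
        elementCount++;
        checkInvokeFinishBundleByCount();
    }

    private void checkInvokeFinishBundleByCount() {
        if (elementCount >= maxBundleSize) {
            invokeFinishBundle();
        }
    }

    void invokeFinishBundle() {          // flush whatever is buffered
        if (!bundle.isEmpty()) {
            finishedBundles.add(new ArrayList<>(bundle));
            bundle.clear();
            elementCount = 0;
        }
    }
}
```

A final invokeFinishBundle() would also be needed on endInput/close so that a partially filled last bundle is not lost.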
dispose
public void dispose() throws Exception

Description copied from class: AbstractStreamOperator
This method is expected to make a thorough effort to release all resources that the operator has acquired.

Specified by: dispose in interface StreamOperator<org.apache.flink.table.runtime.types.CRow>
Specified by: dispose in interface Disposable
Overrides: dispose in class AbstractPythonFunctionOperator<org.apache.flink.table.runtime.types.CRow>
Throws: Exception - if something goes wrong during disposal.

close
public void close() throws Exception

Description copied from class: AbstractStreamOperator
This method is called after all records have been added to the operator via Input.processElement(StreamRecord), or TwoInputStreamOperator.processElement1(StreamRecord) and TwoInputStreamOperator.processElement2(StreamRecord). The method is expected to flush all remaining buffered data. Exceptions during this flushing of buffered data should be propagated, in order to cause the operation to be recognized as failed, because the last data items are not processed properly.

Specified by: close in interface StreamOperator<org.apache.flink.table.runtime.types.CRow>
Overrides: close in class AbstractPythonFunctionOperator<org.apache.flink.table.runtime.types.CRow>
Throws: Exception - An exception in this method causes the operator to fail.

endInput
public void endInput() throws Exception

Description copied from interface: BoundedOneInput
It is notified that no more data will arrive on the input.

Specified by: endInput in interface BoundedOneInput
Overrides: endInput in class AbstractOneInputPythonFunctionOperator<org.apache.flink.table.runtime.types.CRow,org.apache.flink.table.runtime.types.CRow>
Throws: Exception
emitResult
public void emitResult(Tuple2<byte[],Integer> resultTuple) throws Exception

Description copied from class: AbstractPythonFunctionOperator
Sends the execution result to the downstream operator.

Specified by: emitResult in class AbstractPythonFunctionOperator<org.apache.flink.table.runtime.types.CRow>
Throws: Exception

getInputOutputCoderUrn
public String getInputOutputCoderUrn()

Specified by: getInputOutputCoderUrn in class AbstractPythonScalarFunctionOperator<org.apache.flink.table.runtime.types.CRow,org.apache.flink.table.runtime.types.CRow,Row>

processElementInternal
public void processElementInternal(org.apache.flink.table.runtime.types.CRow value) throws Exception

Specified by: processElementInternal in class AbstractStatelessFunctionOperator<org.apache.flink.table.runtime.types.CRow,org.apache.flink.table.runtime.types.CRow,Row>
Throws: Exception
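The constructor's udfInputOffsets and forwardedFields parameters split the columns of each incoming row: the offsets select the columns fed to the Python scalar functions, while the forwarded fields are passed through unchanged and rejoined with the UDF results in the output row. The sketch below illustrates that column selection on plain Object arrays; it is a hypothetical helper, not the Flink implementation, which operates on CRow values and Arrow batches.

```java
import java.util.function.Function;

// Simplified illustration of selecting UDF input columns and forwarded
// columns from an input row, then rejoining the forwarded columns with the
// UDF result. Hypothetical helper for illustration only.
public class ForwardedFieldsSketch {
    static Object[] apply(Object[] row,
                          int[] forwardedFields,
                          int[] udfInputOffsets,
                          Function<Object[], Object> udf) {
        // Gather the columns the UDF consumes.
        Object[] udfInput = new Object[udfInputOffsets.length];
        for (int i = 0; i < udfInputOffsets.length; i++) {
            udfInput[i] = row[udfInputOffsets[i]];
        }
        // Forwarded columns are emitted unchanged, followed by the UDF result.
        Object[] out = new Object[forwardedFields.length + 1];
        for (int i = 0; i < forwardedFields.length; i++) {
            out[i] = row[forwardedFields[i]];
        }
        out[forwardedFields.length] = udf.apply(udfInput);
        return out;
    }
}
```

For example, with row {1, "key", 3}, forwardedFields {1}, and udfInputOffsets {0, 2}, a sum UDF would produce {"key", 4}.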
Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.