@Internal public class RowDataPythonScalarFunctionOperator extends AbstractRowDataPythonScalarFunctionOperator
The Python ScalarFunction operator for the blink planner.

Nested classes/interfaces inherited from class AbstractStatelessFunctionOperator:
AbstractStatelessFunctionOperator.StreamRecordCRowWrappingCollector, AbstractStatelessFunctionOperator.StreamRecordRowDataWrappingCollector
Fields inherited from class AbstractRowDataPythonScalarFunctionOperator:
reuseJoinedRow, rowDataWrapper

Fields inherited from class AbstractPythonScalarFunctionOperator:
forwardedFields, scalarFunctions

Fields inherited from class AbstractStatelessFunctionOperator:
bais, baisWrapper, forwardedInputQueue, inputType, outputType, userDefinedFunctionInputOffsets, userDefinedFunctionInputType, userDefinedFunctionOutputType, userDefinedFunctionResultQueue

Fields inherited from class AbstractStreamOperator:
chainingStrategy, latencyStats, LOG, metrics, output, processingTimeService
| Constructor and Description |
|---|
| RowDataPythonScalarFunctionOperator(Configuration config, PythonFunctionInfo[] scalarFunctions, RowType inputType, RowType outputType, int[] udfInputOffsets, int[] forwardedFields) |
| Modifier and Type | Method and Description |
|---|---|
| PythonFunctionRunner&lt;RowData&gt; | createPythonFunctionRunner(org.apache.beam.sdk.fn.data.FnDataReceiver&lt;byte[]&gt; resultReceiver, PythonEnvironmentManager pythonEnvironmentManager, Map&lt;String,String&gt; jobOptions) |
| void | emitResults() Sends the execution results to the downstream operator. |
| void | open() This method is called immediately before any elements are processed; it should contain the operator's initialization logic, e.g. state initialization. |
Methods inherited from class AbstractRowDataPythonScalarFunctionOperator:
bufferInput, getFunctionInput

Methods inherited from class AbstractPythonScalarFunctionOperator:
getPythonEnv

Methods inherited from class AbstractStatelessFunctionOperator:
createPythonFunctionRunner, processElement

Methods inherited from class AbstractPythonFunctionOperator:
close, createPythonEnvironmentManager, dispose, endInput, getFlinkMetricContainer, getPythonConfig, prepareSnapshotPreBarrier, processWatermark

Methods inherited from class AbstractStreamOperator:
getChainingStrategy, getContainingTask, getCurrentKey, getExecutionConfig, getInternalTimerService, getKeyedStateBackend, getKeyedStateStore, getMetricGroup, getOperatorConfig, getOperatorID, getOperatorName, getOperatorStateBackend, getOrCreateKeyedState, getPartitionedState, getPartitionedState, getProcessingTimeService, getRuntimeContext, getTimeServiceManager, getUserCodeClassloader, initializeState, initializeState, isUsingCustomRawKeyedState, notifyCheckpointAborted, notifyCheckpointComplete, numEventTimeTimers, numProcessingTimeTimers, processLatencyMarker, processLatencyMarker1, processLatencyMarker2, processWatermark1, processWatermark2, reportOrForwardLatencyMarker, setChainingStrategy, setCurrentKey, setKeyContextElement1, setKeyContextElement2, setProcessingTimeService, setup, snapshotState, snapshotState

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface Input:
processLatencyMarker

Methods inherited from interface StreamOperator:
getMetricGroup, getOperatorID, initializeState, setKeyContextElement1, setKeyContextElement2, snapshotState

Methods inherited from interface CheckpointListener:
notifyCheckpointAborted, notifyCheckpointComplete

Methods inherited from interface KeyContext:
getCurrentKey, setCurrentKey
public RowDataPythonScalarFunctionOperator(Configuration config, PythonFunctionInfo[] scalarFunctions, RowType inputType, RowType outputType, int[] udfInputOffsets, int[] forwardedFields)
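This operator is annotated @Internal and is normally instantiated by the blink planner rather than by user code. Purely as an illustration of the constructor arguments, the following is a minimal sketch of how it might be wired up by hand; the helper method, the two-column BIGINT schema, and the offset arrays are assumptions made for this example, not taken from the planner.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.table.functions.python.PythonFunctionInfo;
import org.apache.flink.table.runtime.operators.python.scalar.RowDataPythonScalarFunctionOperator;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.RowType;

public class OperatorWiringSketch {

    // Hypothetical factory: the planner performs equivalent wiring internally
    // when it translates a projection that calls a Python ScalarFunction.
    static RowDataPythonScalarFunctionOperator createOperator(
            PythonFunctionInfo[] scalarFunctions) {
        // Input rows: (id BIGINT, v BIGINT). In this example column 1 is fed to
        // the Python UDF and column 0 is forwarded unchanged on the Java side.
        RowType inputType = RowType.of(new BigIntType(), new BigIntType());
        // Output rows: the forwarded field followed by the UDF result field.
        RowType outputType = RowType.of(new BigIntType(), new BigIntType());

        int[] udfInputOffsets = {1};
        int[] forwardedFields = {0};

        return new RowDataPythonScalarFunctionOperator(
                new Configuration(),
                scalarFunctions,
                inputType,
                outputType,
                udfInputOffsets,
                forwardedFields);
    }
}
```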
public void open() throws Exception

Description copied from class: AbstractStreamOperator
This method is called immediately before any elements are processed; it should contain the operator's initialization logic, e.g. state initialization. The default implementation does nothing.

Specified by:
open in interface StreamOperator<RowData>
Overrides:
open in class AbstractRowDataPythonScalarFunctionOperator
Throws:
Exception - An exception in this method causes the operator to fail.
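To make the lifecycle contract concrete, here is a minimal sketch of a hypothetical subclass that adds its own initialization in open(). The subclass and the metric name are assumptions for illustration only; the essential point is that super.open() runs the inherited initialization before any custom logic, and that an exception thrown here fails the operator.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.table.functions.python.PythonFunctionInfo;
import org.apache.flink.table.runtime.operators.python.scalar.RowDataPythonScalarFunctionOperator;
import org.apache.flink.table.types.logical.RowType;

// Hypothetical subclass, shown only to illustrate the open() contract.
public class InstrumentedPythonScalarFunctionOperator
        extends RowDataPythonScalarFunctionOperator {

    private transient Counter openCounter;

    public InstrumentedPythonScalarFunctionOperator(
            Configuration config,
            PythonFunctionInfo[] scalarFunctions,
            RowType inputType,
            RowType outputType,
            int[] udfInputOffsets,
            int[] forwardedFields) {
        super(config, scalarFunctions, inputType, outputType, udfInputOffsets, forwardedFields);
    }

    @Override
    public void open() throws Exception {
        // Run the inherited initialization first, then add custom setup.
        super.open();
        // Purely illustrative metric registration.
        openCounter = getMetricGroup().counter("opens");
        openCounter.inc();
    }
}
```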
public void emitResults() throws IOException

Description copied from class: AbstractPythonFunctionOperator
Sends the execution results to the downstream operator.

Specified by:
emitResults in class AbstractPythonFunctionOperator<RowData,RowData>
Throws:
IOException
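Results coming back from the Python side only become visible downstream once emitResults() forwards them. The sketch below drives the full lifecycle with Flink's operator test harness from the flink-streaming-java test utilities; the input row is made up for the example, and actually producing output requires a working Python environment with the corresponding Python UDFs available, so treat this as an outline rather than a ready-to-run test.

```java
import java.util.Queue;

import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.operators.python.scalar.RowDataPythonScalarFunctionOperator;

public class LifecycleSketch {

    static void drive(RowDataPythonScalarFunctionOperator operator) throws Exception {
        // The harness sets up and opens the operator, feeds elements,
        // and collects whatever the operator emits downstream.
        OneInputStreamOperatorTestHarness<RowData, RowData> harness =
                new OneInputStreamOperatorTestHarness<>(operator);
        harness.open();

        // Hypothetical input row (id BIGINT, v BIGINT).
        harness.processElement(new StreamRecord<>(GenericRowData.of(1L, 2L)));

        // Results arrive asynchronously from the Python workers; closing the
        // harness flushes pending bundles so buffered results are forwarded.
        harness.close();

        // Everything the operator sent downstream (records and watermarks).
        Queue<Object> emitted = harness.getOutput();
    }
}
```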
public PythonFunctionRunner<RowData> createPythonFunctionRunner(org.apache.beam.sdk.fn.data.FnDataReceiver<byte[]> resultReceiver, PythonEnvironmentManager pythonEnvironmentManager, Map<String,String> jobOptions)

Specified by:
createPythonFunctionRunner in class AbstractStatelessFunctionOperator<RowData,RowData,RowData>
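If the runner creation ever needed decorating, for example to observe the raw result batches coming back from the Python workers, a hypothetical subclass (such as the one sketched under open() above, whose imports and constructor are assumed here) could wrap the Beam FnDataReceiver before delegating to this implementation. The logging is illustrative only; the types match the signature shown above.

```java
// Member of the hypothetical subclass sketched earlier; not a standalone class.
@Override
public PythonFunctionRunner<RowData> createPythonFunctionRunner(
        org.apache.beam.sdk.fn.data.FnDataReceiver<byte[]> resultReceiver,
        PythonEnvironmentManager pythonEnvironmentManager,
        Map<String, String> jobOptions) {
    // Wrap the receiver so every serialized result batch is logged before it
    // is handed to the receiver used by the default runner.
    org.apache.beam.sdk.fn.data.FnDataReceiver<byte[]> loggingReceiver = bytes -> {
        LOG.debug("Received {} bytes of Python UDF results", bytes.length);
        resultReceiver.accept(bytes);
    };
    return super.createPythonFunctionRunner(
            loggingReceiver, pythonEnvironmentManager, jobOptions);
}
```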
Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.