@Internal
public class BaseRowPythonScalarFunctionOperator
extends AbstractPythonScalarFunctionOperator<BaseRow,BaseRow,BaseRow,BaseRow>

The ScalarFunction operator for the blink planner.

Nested classes/interfaces inherited from class AbstractStreamOperator:
AbstractStreamOperator.CountingOutput<OUT>

Fields inherited from class AbstractPythonScalarFunctionOperator:
forwardedFields, forwardedInputQueue, inputType, outputType, scalarFunctions, udfInputOffsets, udfInputType, udfOutputType, udfResultQueue

Fields inherited from class AbstractStreamOperator:
chainingStrategy, latencyStats, LOG, metrics, output, timeServiceManager
| Constructor and Description |
|---|
| BaseRowPythonScalarFunctionOperator(Configuration config, PythonFunctionInfo[] scalarFunctions, RowType inputType, RowType outputType, int[] udfInputOffsets, int[] forwardedFields) |
| Modifier and Type | Method and Description |
|---|---|
| void | bufferInput(BaseRow input): Buffers the specified input; it will be used to construct the operator result together with the UDF execution result. |
| PythonFunctionRunner<BaseRow> | createPythonFunctionRunner(org.apache.beam.sdk.fn.data.FnDataReceiver<BaseRow> resultReceiver, PythonEnvironmentManager pythonEnvironmentManager) |
| void | emitResults(): Sends the execution results to the downstream operator. |
| BaseRow | getUdfInput(BaseRow element) |
| void | open(): Called immediately before any elements are processed; it should contain the operator's initialization logic. |
Methods inherited from class AbstractPythonScalarFunctionOperator:
createPythonFunctionRunner, getPythonEnv, processElement

Methods inherited from class AbstractPythonFunctionOperator:
close, createPythonEnvironmentManager, dispose, endInput, prepareSnapshotPreBarrier, processWatermark

Methods inherited from class AbstractStreamOperator:
getChainingStrategy, getContainingTask, getCurrentKey, getExecutionConfig, getInternalTimerService, getKeyedStateBackend, getKeyedStateStore, getMetricGroup, getOperatorConfig, getOperatorID, getOperatorName, getOperatorStateBackend, getOrCreateKeyedState, getPartitionedState, getPartitionedState, getProcessingTimeService, getRuntimeContext, getUserCodeClassloader, initializeState, initializeState, notifyCheckpointComplete, numEventTimeTimers, numProcessingTimeTimers, processLatencyMarker, processLatencyMarker1, processLatencyMarker2, processWatermark1, processWatermark2, reportOrForwardLatencyMarker, setChainingStrategy, setCurrentKey, setKeyContextElement1, setKeyContextElement2, setup, snapshotState, snapshotState

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface OneInputStreamOperator:
processLatencyMarker

Methods inherited from interface StreamOperator:
getChainingStrategy, getMetricGroup, getOperatorID, initializeState, setChainingStrategy, setKeyContextElement1, setKeyContextElement2, snapshotState

Methods inherited from interface CheckpointListener:
notifyCheckpointComplete

Methods inherited from interface KeyContext:
getCurrentKey, setCurrentKey
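The methods summarized above cooperate in a simple protocol: each forwarded input row is buffered until the matching Python UDF result arrives, then the two are joined into one output row. A minimal sketch of that buffering protocol, using plain Java arrays and queues instead of Flink's BaseRow (all class and method names here are illustrative assumptions, not Flink API):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

// Hypothetical, simplified model of the operator's buffering protocol:
// forwarded input columns wait in a FIFO queue until the corresponding
// UDF result row arrives; the two are then concatenated into the output.
public class ScalarFunctionBufferingSketch {
    private final Queue<Object[]> forwardedInputQueue = new ArrayDeque<>();
    private final Queue<Object[]> udfResultQueue = new ArrayDeque<>();
    private final int[] forwardedFields;

    public ScalarFunctionBufferingSketch(int[] forwardedFields) {
        this.forwardedFields = forwardedFields;
    }

    // Mirrors bufferInput(BaseRow): keep only the forwarded columns.
    public void bufferInput(Object[] row) {
        Object[] forwarded = new Object[forwardedFields.length];
        for (int i = 0; i < forwardedFields.length; i++) {
            forwarded[i] = row[forwardedFields[i]];
        }
        forwardedInputQueue.add(forwarded);
    }

    // Stands in for the Python runner delivering one result row.
    public void onUdfResult(Object[] udfResult) {
        udfResultQueue.add(udfResult);
    }

    // Mirrors emitResults(): join forwarded columns with the UDF output.
    public Object[] emitOne() {
        Object[] forwarded = forwardedInputQueue.remove();
        Object[] udf = udfResultQueue.remove();
        Object[] out = Arrays.copyOf(forwarded, forwarded.length + udf.length);
        System.arraycopy(udf, 0, out, forwarded.length, udf.length);
        return out;
    }
}
```

The FIFO pairing works because the Python runner preserves input order, so the n-th buffered row always matches the n-th result.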
public BaseRowPythonScalarFunctionOperator(Configuration config, PythonFunctionInfo[] scalarFunctions, RowType inputType, RowType outputType, int[] udfInputOffsets, int[] forwardedFields)
open

public void open() throws Exception

Description copied from class: AbstractStreamOperator
This method is called immediately before any elements are processed; it should contain the operator's initialization logic. The default implementation does nothing.

Specified by:
open in interface StreamOperator<BaseRow>
Overrides:
open in class AbstractPythonScalarFunctionOperator<BaseRow,BaseRow,BaseRow,BaseRow>
Throws:
Exception - An exception in this method causes the operator to fail.

bufferInput

public void bufferInput(BaseRow input)

Description copied from class: AbstractPythonScalarFunctionOperator
Buffers the specified input; it will be used to construct the operator result together with the UDF execution result.

Specified by:
bufferInput in class AbstractPythonScalarFunctionOperator<BaseRow,BaseRow,BaseRow,BaseRow>
getUdfInput

public BaseRow getUdfInput(BaseRow element)

Specified by:
getUdfInput in class AbstractPythonScalarFunctionOperator<BaseRow,BaseRow,BaseRow,BaseRow>
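getUdfInput extracts from the full input row only the columns the Python UDFs consume, as selected by udfInputOffsets. A plain-Java sketch of that projection (array-based, since BaseRow is not modeled here; the class and method names are assumptions for illustration):

```java
// Illustrative projection analogous to getUdfInput(BaseRow): select the
// columns at udfInputOffsets to build the row sent to the Python UDFs.
public class UdfInputProjectionSketch {
    public static Object[] project(Object[] row, int[] udfInputOffsets) {
        Object[] udfInput = new Object[udfInputOffsets.length];
        for (int i = 0; i < udfInputOffsets.length; i++) {
            udfInput[i] = row[udfInputOffsets[i]];
        }
        return udfInput;
    }
}
```

Note that the offsets may reorder and repeat columns; the UDF input row follows the order of udfInputOffsets, not the order of the original schema.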
emitResults

public void emitResults()

Description copied from class: AbstractPythonFunctionOperator
Sends the execution results to the downstream operator.

Specified by:
emitResults in class AbstractPythonFunctionOperator<BaseRow,BaseRow>
createPythonFunctionRunner

public PythonFunctionRunner<BaseRow> createPythonFunctionRunner(org.apache.beam.sdk.fn.data.FnDataReceiver<BaseRow> resultReceiver, PythonEnvironmentManager pythonEnvironmentManager)

Specified by:
createPythonFunctionRunner in class AbstractPythonScalarFunctionOperator<BaseRow,BaseRow,BaseRow,BaseRow>
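createPythonFunctionRunner hands the runner a resultReceiver callback, which the runner invokes once per UDF result row. A minimal stand-in for that callback contract (the Receiver interface below only mimics the shape of Beam's FnDataReceiver; it is not the Beam type, and the queue wiring is an illustrative assumption):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ResultReceiverSketch {
    // Minimal stand-in mimicking org.apache.beam.sdk.fn.data.FnDataReceiver<T>.
    @FunctionalInterface
    interface Receiver<T> {
        void accept(T value) throws Exception;
    }

    public static void main(String[] args) throws Exception {
        // The operator would hand a callback like this to the runner; each
        // accepted row is queued until emitResults() drains it.
        Queue<Object[]> udfResultQueue = new ArrayDeque<>();
        Receiver<Object[]> resultReceiver = udfResultQueue::add;

        resultReceiver.accept(new Object[]{"udf-result"});
        System.out.println("queued results: " + udfResultQueue.size()); // prints "queued results: 1"
    }
}
```

Decoupling result delivery (the callback) from result emission (emitResults) is what lets the operator batch rows to the Python process asynchronously while still emitting in order.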
Copyright © 2014–2020 The Apache Software Foundation. All rights reserved.