public class YarnClusterDescriptor extends Object implements ClusterDescriptor<org.apache.hadoop.yarn.api.records.ApplicationId>
| Constructor and Description |
| --- |
| YarnClusterDescriptor(Configuration flinkConfiguration, org.apache.hadoop.yarn.conf.YarnConfiguration yarnConfiguration, org.apache.hadoop.yarn.client.api.YarnClient yarnClient, YarnClusterInformationRetriever yarnClusterInformationRetriever, boolean sharedYarnClient) |
| Modifier and Type | Method and Description |
| --- | --- |
| void | addShipFiles(List<File> shipFiles): Adds the given files to the list of files to ship. |
| void | close() |
| ClusterClientProvider<org.apache.hadoop.yarn.api.records.ApplicationId> | deployApplicationCluster(ClusterSpecification clusterSpecification, ApplicationConfiguration applicationConfiguration): Triggers deployment of an application cluster. |
| ClusterClientProvider<org.apache.hadoop.yarn.api.records.ApplicationId> | deployJobCluster(ClusterSpecification clusterSpecification, JobGraph jobGraph, boolean detached): Deploys a per-job cluster with the given job on the cluster. |
| ClusterClientProvider<org.apache.hadoop.yarn.api.records.ApplicationId> | deploySessionCluster(ClusterSpecification clusterSpecification): Triggers deployment of a cluster. |
| String | getClusterDescription(): Returns a String containing details about the cluster (NodeManagers, available memory, ...). |
| Configuration | getFlinkConfiguration() |
| String | getNodeLabel() |
| org.apache.hadoop.yarn.client.api.YarnClient | getYarnClient() |
| protected String | getYarnJobClusterEntrypoint(): The class to start the application master with. |
| protected String | getYarnSessionClusterEntrypoint(): The class to start the application master with. |
| void | killCluster(org.apache.hadoop.yarn.api.records.ApplicationId applicationId): Terminates the cluster with the given cluster id. |
| static void | logDetachedClusterInformation(org.apache.hadoop.yarn.api.records.ApplicationId yarnApplicationId, org.slf4j.Logger logger) |
| ClusterClientProvider<org.apache.hadoop.yarn.api.records.ApplicationId> | retrieve(org.apache.hadoop.yarn.api.records.ApplicationId applicationId): Retrieves an existing Flink Cluster. |
| void | setLocalJarPath(org.apache.hadoop.fs.Path localJarPath) |
public YarnClusterDescriptor(Configuration flinkConfiguration, org.apache.hadoop.yarn.conf.YarnConfiguration yarnConfiguration, org.apache.hadoop.yarn.client.api.YarnClient yarnClient, YarnClusterInformationRetriever yarnClusterInformationRetriever, boolean sharedYarnClient)
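A minimal sketch of wiring the constructor together. The use of YarnClientYarnClusterInformationRetriever.create(YarnClient) as the retriever and the interpretation of sharedYarnClient = true (the caller keeps ownership of the YarnClient, so closing the descriptor does not stop it) are assumptions here, not statements from this page:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.yarn.YarnClientYarnClusterInformationRetriever;
import org.apache.flink.yarn.YarnClusterDescriptor;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class DescriptorFactory {

    public static YarnClusterDescriptor create(Configuration flinkConfig) {
        YarnConfiguration yarnConfig = new YarnConfiguration();

        // The caller owns this client; sharedYarnClient = true is assumed to
        // mean the descriptor will not stop it on close().
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(yarnConfig);
        yarnClient.start();

        return new YarnClusterDescriptor(
                flinkConfig,
                yarnConfig,
                yarnClient,
                YarnClientYarnClusterInformationRetriever.create(yarnClient),
                true);
    }
}
```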
public org.apache.hadoop.yarn.client.api.YarnClient getYarnClient()
protected String getYarnSessionClusterEntrypoint()
protected String getYarnJobClusterEntrypoint()
public Configuration getFlinkConfiguration()
public void setLocalJarPath(org.apache.hadoop.fs.Path localJarPath)
public void addShipFiles(List<File> shipFiles)
Adds the given files to the list of files to ship.

Note that any file matching "flink-dist*.jar" will be excluded from the upload by YarnApplicationFileUploader.registerMultipleLocalResources(Collection, String, LocalResourceType), since we upload the Flink uber jar ourselves and do not need to deploy it multiple times.

Parameters:
shipFiles - files to ship

public String getNodeLabel()
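As a rough illustration of the setLocalJarPath(Path) and addShipFiles(List<File>) calls documented above; the local paths are hypothetical:

```java
import java.io.File;
import java.util.Arrays;
import org.apache.flink.yarn.YarnClusterDescriptor;
import org.apache.hadoop.fs.Path;

public class ShipFilesExample {

    public static void configure(YarnClusterDescriptor descriptor) {
        // Hypothetical location of the Flink uber jar; adjust to the installation.
        descriptor.setLocalJarPath(new Path("/opt/flink/lib/flink-dist.jar"));

        // Entries may be files or directories. A file matching "flink-dist*.jar"
        // in this list would be excluded from the upload, as noted above.
        descriptor.addShipFiles(Arrays.asList(
                new File("/opt/flink/plugins"),
                new File("/opt/job/udf-deps.jar")));
    }
}
```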
public void close()

Specified by:
close in interface AutoCloseable
Specified by:
close in interface ClusterDescriptor<org.apache.hadoop.yarn.api.records.ApplicationId>
public ClusterClientProvider<org.apache.hadoop.yarn.api.records.ApplicationId> retrieve(org.apache.hadoop.yarn.api.records.ApplicationId applicationId) throws ClusterRetrieveException

Description copied from interface: ClusterDescriptor
Retrieves an existing Flink Cluster.

Specified by:
retrieve in interface ClusterDescriptor<org.apache.hadoop.yarn.api.records.ApplicationId>

Parameters:
applicationId - The unique identifier of the running cluster

Throws:
ClusterRetrieveException - if the cluster client could not be retrieved
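A hedged sketch of attaching to an already-running cluster. The application id is a made-up placeholder, and obtaining the client via ClusterClientProvider.getClusterClient() is assumed:

```java
import org.apache.flink.client.program.ClusterClient;
import org.apache.flink.yarn.YarnClusterDescriptor;
import org.apache.hadoop.yarn.api.records.ApplicationId;

public class RetrieveExample {

    public static void printWebUrl(YarnClusterDescriptor descriptor) throws Exception {
        // Placeholder id of an already-running Flink-on-YARN cluster.
        ApplicationId appId = ApplicationId.fromString("application_1700000000000_0001");

        try (ClusterClient<ApplicationId> client =
                descriptor.retrieve(appId).getClusterClient()) {
            System.out.println("Web UI: " + client.getWebInterfaceURL());
        }
    }
}
```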
public ClusterClientProvider<org.apache.hadoop.yarn.api.records.ApplicationId> deploySessionCluster(ClusterSpecification clusterSpecification) throws ClusterDeploymentException

Description copied from interface: ClusterDescriptor
Triggers deployment of a cluster.

Specified by:
deploySessionCluster in interface ClusterDescriptor<org.apache.hadoop.yarn.api.records.ApplicationId>

Parameters:
clusterSpecification - Cluster specification defining the cluster to deploy

Throws:
ClusterDeploymentException - if the cluster could not be deployed
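A sketch of a session-cluster deployment, assuming ClusterSpecification exposes the builder shown below; the memory and slot values are illustrative only:

```java
import org.apache.flink.client.deployment.ClusterSpecification;
import org.apache.flink.client.program.ClusterClientProvider;
import org.apache.flink.yarn.YarnClusterDescriptor;
import org.apache.hadoop.yarn.api.records.ApplicationId;

public class SessionDeployExample {

    public static ClusterClientProvider<ApplicationId> deploy(YarnClusterDescriptor descriptor)
            throws Exception {
        // Illustrative sizes; in practice these are derived from the Flink configuration.
        ClusterSpecification spec = new ClusterSpecification.ClusterSpecificationBuilder()
                .setMasterMemoryMB(1024)
                .setTaskManagerMemoryMB(4096)
                .setSlotsPerTaskManager(2)
                .createClusterSpecification();

        return descriptor.deploySessionCluster(spec);
    }
}
```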
public ClusterClientProvider<org.apache.hadoop.yarn.api.records.ApplicationId> deployApplicationCluster(ClusterSpecification clusterSpecification, ApplicationConfiguration applicationConfiguration) throws ClusterDeploymentException

Description copied from interface: ClusterDescriptor
Triggers deployment of an application cluster. The main() of the application's user code will be executed on the cluster, rather than the client.

Specified by:
deployApplicationCluster in interface ClusterDescriptor<org.apache.hadoop.yarn.api.records.ApplicationId>

Parameters:
clusterSpecification - Cluster specification defining the cluster to deploy
applicationConfiguration - Application-specific configuration parameters

Throws:
ClusterDeploymentException - if the cluster could not be deployed
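A sketch of an application-cluster deployment. The entry-point class and program arguments are hypothetical, and the two-argument ApplicationConfiguration constructor is an assumption:

```java
import org.apache.flink.client.deployment.ClusterSpecification;
import org.apache.flink.client.deployment.application.ApplicationConfiguration;
import org.apache.flink.yarn.YarnClusterDescriptor;

public class ApplicationDeployExample {

    public static void deploy(YarnClusterDescriptor descriptor, ClusterSpecification spec)
            throws Exception {
        // Hypothetical entry point and arguments; main() runs on the cluster,
        // not on this client.
        ApplicationConfiguration appConfig = new ApplicationConfiguration(
                new String[] {"--input", "hdfs:///data/in"},
                "com.example.MyFlinkJob");

        descriptor.deployApplicationCluster(spec, appConfig);
    }
}
```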
public ClusterClientProvider<org.apache.hadoop.yarn.api.records.ApplicationId> deployJobCluster(ClusterSpecification clusterSpecification, JobGraph jobGraph, boolean detached) throws ClusterDeploymentException

Description copied from interface: ClusterDescriptor
Deploys a per-job cluster with the given job on the cluster.

Specified by:
deployJobCluster in interface ClusterDescriptor<org.apache.hadoop.yarn.api.records.ApplicationId>

Parameters:
clusterSpecification - Initial cluster specification with which the Flink cluster is launched
jobGraph - JobGraph with which the job cluster is started
detached - true if the cluster should be stopped after the job completion without serving the result, otherwise false

Throws:
ClusterDeploymentException - if the cluster could not be deployed
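A sketch of the per-job path. Building the JobGraph from a throwaway StreamExecutionEnvironment pipeline is one possible way to obtain one and is an assumption here, not the only route:

```java
import org.apache.flink.client.deployment.ClusterSpecification;
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.yarn.YarnClusterDescriptor;

public class JobDeployExample {

    public static void deploy(YarnClusterDescriptor descriptor, ClusterSpecification spec)
            throws Exception {
        // Trivial pipeline, only to produce a JobGraph for illustration.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements(1, 2, 3).print();
        JobGraph jobGraph = env.getStreamGraph().getJobGraph();

        // detached = true: tear the cluster down on job completion without
        // serving the result back to this client.
        descriptor.deployJobCluster(spec, jobGraph, true);
    }
}
```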
public void killCluster(org.apache.hadoop.yarn.api.records.ApplicationId applicationId) throws FlinkException

Description copied from interface: ClusterDescriptor
Terminates the cluster with the given cluster id.

Specified by:
killCluster in interface ClusterDescriptor<org.apache.hadoop.yarn.api.records.ApplicationId>

Parameters:
applicationId - identifying the cluster to shut down

Throws:
FlinkException - if the cluster could not be terminated
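Termination then reduces to a single call; the application id below is a placeholder:

```java
import org.apache.flink.util.FlinkException;
import org.apache.flink.yarn.YarnClusterDescriptor;
import org.apache.hadoop.yarn.api.records.ApplicationId;

public class KillExample {

    public static void kill(YarnClusterDescriptor descriptor) throws FlinkException {
        // Placeholder id of the cluster to shut down.
        ApplicationId appId = ApplicationId.fromString("application_1700000000000_0001");
        descriptor.killCluster(appId);
    }
}
```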
public String getClusterDescription()

Description copied from interface: ClusterDescriptor
Returns a String containing details about the cluster (NodeManagers, available memory, ...).

Specified by:
getClusterDescription in interface ClusterDescriptor<org.apache.hadoop.yarn.api.records.ApplicationId>
public static void logDetachedClusterInformation(org.apache.hadoop.yarn.api.records.ApplicationId yarnApplicationId, org.slf4j.Logger logger)