View source: R/data-structures.R
new_cluster: New Cluster

Usage
new_cluster(
num_workers,
spark_version,
node_type_id,
driver_node_type_id = NULL,
autoscale = NULL,
cloud_attrs = NULL,
spark_conf = NULL,
spark_env_vars = NULL,
custom_tags = NULL,
ssh_public_keys = NULL,
log_conf = NULL,
init_scripts = NULL,
enable_elastic_disk = TRUE,
driver_instance_pool_id = NULL,
instance_pool_id = NULL
)
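Before the argument details, a quick sketch of a minimal call. The runtime version and node type below are placeholder values, not defaults from this package; valid values depend on your workspace and cloud provider.

library(brickster)

# Minimal fixed-size cluster: 2 workers plus 1 driver (3 Spark nodes).
# "13.3.x-scala2.12" and "m5.xlarge" are placeholders; query your
# workspace for valid runtime versions and node types.
cluster <- new_cluster(
  num_workers   = 2,
  spark_version = "13.3.x-scala2.12",
  node_type_id  = "m5.xlarge"
)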
Arguments

num_workers
  Number of worker nodes that this cluster should have. A cluster has one
  Spark driver and num_workers executors for a total of num_workers + 1
  Spark nodes.

spark_version
  The runtime version of the cluster. You can retrieve a list of available
  runtime versions by using db_cluster_runtime_versions().

node_type_id
  The node type for the worker nodes.

driver_node_type_id
  The node type of the Spark driver. This field is optional; if unset, the
  driver node type will be set to the same value as node_type_id.

autoscale
  Instance of cluster_autoscale().

cloud_attrs
  Attributes related to clusters running on a specific cloud provider.
  Defaults to NULL; expects an instance of aws_attributes(),
  azure_attributes(), or gcp_attributes().

spark_conf
  Named list. An object containing a set of optional, user-specified Spark
  configuration key-value pairs. You can also pass in a string of extra JVM
  options to the driver and the executors via spark.driver.extraJavaOptions
  and spark.executor.extraJavaOptions respectively (see the sketch after
  this list).

spark_env_vars
  Named list. User-specified environment variable key-value pairs. To
  specify an additional set of SPARK_DAEMON_JAVA_OPTS, append them to
  $SPARK_DAEMON_JAVA_OPTS rather than replacing it, so that the default
  Databricks-managed environment variables are preserved (see the sketch
  after this list).

custom_tags
  Named list. An object containing a set of tags for cluster resources.
  Databricks tags all cluster resources with these tags in addition to
  default_tags.

ssh_public_keys
  List. SSH public key contents that will be added to each Spark node in
  this cluster. The corresponding private keys can be used to log in with
  the user name ubuntu on port 2200. Up to 10 keys can be specified.

log_conf
  Instance of cluster_log_conf().

init_scripts
  Instance of init_script_info().

enable_elastic_disk
  When enabled, this cluster will dynamically acquire additional disk space
  when its Spark workers are running low on disk space.

driver_instance_pool_id
  ID of the instance pool to use for the driver node. You must also specify
  instance_pool_id.

instance_pool_id
  ID of the instance pool to use for cluster nodes. If
  driver_instance_pool_id is present, instance_pool_id is used for worker
  nodes only; otherwise it is used for both the driver and worker nodes.
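A fuller sketch combining autoscaling, Spark configuration, and environment variables. All values are illustrative, and cluster_autoscale()'s argument names (min_workers, max_workers) are assumptions based on this package's documented constructors rather than anything stated on this page.

library(brickster)

# Autoscaling cluster with extra JVM options and an appended
# SPARK_DAEMON_JAVA_OPTS; every value below is illustrative.
cluster <- new_cluster(
  num_workers   = 2,
  spark_version = "13.3.x-scala2.12",
  node_type_id  = "m5.xlarge",
  autoscale     = cluster_autoscale(min_workers = 2, max_workers = 8),
  spark_conf    = list(
    "spark.speculation"               = "true",
    "spark.driver.extraJavaOptions"   = "-XX:+UseG1GC",
    "spark.executor.extraJavaOptions" = "-XX:+UseG1GC"
  ),
  spark_env_vars = list(
    # Append to, rather than replace, the Databricks-managed defaults:
    "SPARK_DAEMON_JAVA_OPTS" =
      "$SPARK_DAEMON_JAVA_OPTS -Dspark.shuffle.service.enabled=true"
  ),
  custom_tags = list("team" = "data-eng")
)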
See Also

job_task()

Other Task Objects: email_notifications(), libraries(), notebook_task(),
pipeline_task(), python_wheel_task(), spark_jar_task(),
spark_python_task(), spark_submit_task()
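Since new_cluster() belongs to the task-object family above, a typical use is supplying the cluster spec to job_task(). The argument names below (task_key, new_cluster, task) are assumptions about job_task()'s signature, so check its own documentation; the notebook path is a placeholder.

# Hedged sketch: run a notebook task on a fresh cluster built with
# new_cluster(); argument names and the notebook path are assumptions.
task <- job_task(
  task_key    = "example_task",
  new_cluster = cluster,
  task        = notebook_task(notebook_path = "/path/to/notebook")
)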