db_cluster_edit (R Documentation)
Edit the configuration of a cluster to match the provided attributes and size.
db_cluster_edit(
cluster_id,
spark_version,
node_type_id,
num_workers = NULL,
autoscale = NULL,
name = NULL,
spark_conf = NULL,
cloud_attrs = NULL,
driver_node_type_id = NULL,
custom_tags = NULL,
init_scripts = NULL,
spark_env_vars = NULL,
autotermination_minutes = NULL,
log_conf = NULL,
ssh_public_keys = NULL,
driver_instance_pool_id = NULL,
instance_pool_id = NULL,
idempotency_token = NULL,
enable_elastic_disk = NULL,
apply_policy_default_values = NULL,
enable_local_disk_encryption = NULL,
docker_image = NULL,
policy_id = NULL,
host = db_host(),
token = db_token(),
perform_request = TRUE
)
cluster_id
Canonical identifier for the cluster.
spark_version
The runtime version of the cluster. You can retrieve a list of available runtime versions by using db_cluster_runtime_versions().
node_type_id
The node type for the worker nodes.
num_workers
Number of worker nodes that this cluster should have. A cluster has one Spark driver and num_workers executors, for a total of num_workers + 1 Spark nodes.
autoscale
Instance of cluster_autoscale().
name
Cluster name requested by the user. This doesn't have to be unique. If not specified at creation, the cluster name will be an empty string.
spark_conf
Named list. An object containing a set of optional, user-specified Spark configuration key-value pairs. You can also pass in a string of extra JVM options to the driver and the executors via spark.driver.extraJavaOptions and spark.executor.extraJavaOptions respectively.
cloud_attrs
Attributes related to clusters running on a specific cloud provider. Defaults to aws_attributes(); must be an instance of aws_attributes(), azure_attributes(), or gcp_attributes().
driver_node_type_id
The node type of the Spark driver. This field is optional; if unset, the driver node type will be set to the same value as node_type_id defined above.
custom_tags
Named list. An object containing a set of tags for cluster resources. Databricks tags all cluster resources with these tags in addition to default_tags.
init_scripts
Instance of init_script_info().
spark_env_vars
Named list. User-specified environment variable key-value pairs. In order to specify an additional set of SPARK_DAEMON_JAVA_OPTS, Databricks recommends appending them to $SPARK_DAEMON_JAVA_OPTS.
autotermination_minutes
Automatically terminates the cluster after it is inactive for this time in minutes. If not set, this cluster will not be automatically terminated. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination. Defaults to 120.
log_conf
Instance of cluster_log_conf().
ssh_public_keys
List. SSH public key contents that will be added to each Spark node in this cluster. The corresponding private keys can be used to log in with the user name ubuntu on port 2200. Up to 10 keys can be specified.
driver_instance_pool_id
ID of the instance pool to use for the driver node. You must also specify instance_pool_id.
instance_pool_id
ID of the instance pool to use for cluster nodes. If driver_instance_pool_id is present, instance_pool_id is used for worker nodes only; otherwise, it is used for both the driver and worker nodes.
idempotency_token
An optional token that can be used to guarantee the idempotency of cluster creation requests. If an active cluster with the provided token already exists, the request will not create a new cluster, but it will return the ID of the existing cluster instead. The existence of a cluster with the same token is not checked against terminated clusters. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks guarantees that exactly one cluster will be launched with that idempotency token. This token should have at most 64 characters.
enable_elastic_disk
When enabled, this cluster will dynamically acquire additional disk space when its Spark workers are running low on disk space.
apply_policy_default_values
Boolean; whether to use policy default values for missing cluster attributes.
enable_local_disk_encryption
Boolean; whether encryption of disks locally attached to the cluster is enabled.
docker_image
Instance of docker_image().
policy_id
String, ID of a cluster policy.
host
Databricks workspace URL, defaults to calling db_host().
token
Databricks workspace token, defaults to calling db_token().
perform_request
If TRUE (default), the request is performed; if FALSE, the httr2 request is returned without being performed.
You can edit a cluster if it is in a RUNNING or TERMINATED state. If you edit a cluster while it is in a RUNNING state, it will be restarted so that the new attributes can take effect. If you edit a cluster while it is in a TERMINATED state, it will remain TERMINATED; the next time it is started using the clusters/start API, the new attributes will take effect. An attempt to edit a cluster in any other state will be rejected with an INVALID_STATE error code.
Clusters created by the Databricks Jobs service cannot be edited.
Other Clusters API: db_cluster_create(), db_cluster_events(), db_cluster_get(), db_cluster_list(), db_cluster_list_node_types(), db_cluster_list_zones(), db_cluster_perm_delete(), db_cluster_pin(), db_cluster_resize(), db_cluster_restart(), db_cluster_runtime_versions(), db_cluster_start(), db_cluster_terminate(), db_cluster_unpin(), get_and_start_cluster(), get_latest_dbr()
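A minimal usage sketch, assuming a configured workspace host and token, and assuming cluster_autoscale() is the helper used for the autoscale argument; the cluster ID, runtime version, and node type below are illustrative placeholders, not values from this documentation:

```r
# Sketch: switch an existing cluster to autoscaling between 2 and 8 workers.
# "1234-567890-ab1cde23", the runtime version, and the node type are
# placeholders -- substitute values valid for your workspace (see
# db_cluster_runtime_versions() and db_cluster_list_node_types()).
db_cluster_edit(
  cluster_id    = "1234-567890-ab1cde23",
  spark_version = "13.3.x-scala2.12",
  node_type_id  = "m5.xlarge",
  autoscale     = cluster_autoscale(min_workers = 2, max_workers = 8),
  autotermination_minutes = 60
)
```

If the cluster is RUNNING it is restarted so the new attributes take effect; if TERMINATED, the new attributes apply the next time it is started.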