db_jobs_run_now: Trigger A New Job Run

View source: R/jobs.R

db_jobs_run_now    R Documentation

Trigger A New Job Run

Description

Trigger A New Job Run

Usage

db_jobs_run_now(
  job_id,
  jar_params = list(),
  notebook_params = list(),
  python_params = list(),
  spark_submit_params = list(),
  host = db_host(),
  token = db_token(),
  perform_request = TRUE
)

Arguments

job_id

The canonical identifier of the job.

jar_params

Named list. Parameters are used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list. jar_params cannot be specified in conjunction with notebook_params.

notebook_params

Named list. Parameters are passed to the notebook and are accessible through the dbutils.widgets.get function. If not specified upon run-now, the triggered run uses the job’s base parameters.

python_params

Named list. Parameters are passed to the Python file as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting.

spark_submit_params

Named list. Parameters are passed to the spark-submit script as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting.

host

Databricks workspace URL, defaults to calling db_host().

token

Databricks workspace token, defaults to calling db_token().

perform_request

If TRUE (default), the request is performed; if FALSE, the httr2 request is returned without being performed.
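Because perform_request = FALSE returns the underlying httr2 request object without contacting the workspace, it can be used to inspect the call before sending it. A minimal sketch (the job ID and widget value are hypothetical placeholders; db_host() and db_token() are assumed to resolve from your configured credentials):

```r
library(brickster)

# Build the request without sending it; no workspace call is made.
req <- db_jobs_run_now(
  job_id = 123456,                        # hypothetical job ID
  notebook_params = list(env = "dev"),    # hypothetical widget value
  perform_request = FALSE
)

# Show what would be sent (method, URL, headers) without performing it.
httr2::req_dry_run(req)
```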

Details

  • *_params parameters cannot exceed 10,000 bytes when serialized to JSON.

  • jar_params and notebook_params are mutually exclusive.
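A minimal sketch of triggering a run, assuming credentials are already configured so that db_host() and db_token() resolve (the job ID and the "env" widget name are hypothetical placeholders for illustration):

```r
library(brickster)

# Trigger a run of an existing notebook job, overriding one widget value.
# Note: jar_params could not be supplied alongside notebook_params here,
# as the two are mutually exclusive.
run <- db_jobs_run_now(
  job_id = 123456,                           # hypothetical job ID
  notebook_params = list(env = "staging")    # read via dbutils.widgets.get("env")
)

# The response identifies the triggered run, e.g. for use with
# db_jobs_runs_get() or db_jobs_runs_cancel().
run$run_id
```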

See Also

Other Jobs API: db_jobs_create(), db_jobs_delete(), db_jobs_get(), db_jobs_list(), db_jobs_reset(), db_jobs_runs_cancel(), db_jobs_runs_delete(), db_jobs_runs_export(), db_jobs_runs_get(), db_jobs_runs_get_output(), db_jobs_runs_list(), db_jobs_runs_submit(), db_jobs_update()


brickster documentation built on April 12, 2025, 1:21 a.m.