db_jobs_run_now: R Documentation
Trigger A New Job Run
db_jobs_run_now(
job_id,
jar_params = list(),
notebook_params = list(),
python_params = list(),
spark_submit_params = list(),
host = db_host(),
token = db_token(),
perform_request = TRUE
)
job_id: The canonical identifier of the job.
jar_params: Named list. Parameters used to invoke the main function of the main class specified in the Spark JAR task. If not specified upon run-now, it defaults to an empty list.
notebook_params: Named list. Parameters are passed to the notebook and are accessible through the dbutils.widgets.get function. If specified upon run-now, they overwrite the parameters specified in the job setting.
python_params: Named list. Parameters are passed to the Python file as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting.
spark_submit_params: Named list. Parameters are passed to the spark-submit script as command-line parameters. If specified upon run-now, they overwrite the parameters specified in the job setting.
host: Databricks workspace URL, defaults to calling db_host().
token: Databricks workspace token, defaults to calling db_token().
perform_request: If TRUE (default), the request is performed; if FALSE, the unevaluated request object is returned.
The *_params parameters cannot exceed 10,000 bytes when serialized to JSON. jar_params and notebook_params are mutually exclusive.
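As a sketch of typical usage, the call below triggers a run of an existing notebook job and passes widget values via notebook_params. The job ID and parameter names are hypothetical, and a configured Databricks host and token (picked up by db_host() and db_token()) are assumed:

```r
# Trigger a run of an existing notebook job (job ID 123 is hypothetical).
# The named values in notebook_params become widget values readable in the
# notebook via dbutils.widgets.get().
run <- db_jobs_run_now(
  job_id = 123,
  notebook_params = list(env = "dev", run_date = "2024-01-01")
)

# The response identifies the triggered run, e.g. for polling its status.
run$run_id
```

Passing perform_request = FALSE instead returns the request object without sending it, which is useful for inspecting the call before execution.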
Other Jobs API: db_jobs_create(), db_jobs_delete(), db_jobs_get(), db_jobs_list(), db_jobs_reset(), db_jobs_runs_cancel(), db_jobs_runs_delete(), db_jobs_runs_export(), db_jobs_runs_get(), db_jobs_runs_get_output(), db_jobs_runs_list(), db_jobs_runs_submit(), db_jobs_update()