```r
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  eval = FALSE
)
library(brickster)
```
{brickster} provides several mechanisms to run code against Databricks. Below is an overview of those available in the package:
| Method | Compatible Compute | Notes |
|--------|--------------------|-------|
| `db_sql_query` | SQL Warehouse | Simple and efficient function to run SQL |
| SQL execution API (`db_sql_exec_*`) | SQL Warehouse | Lower level functions that align 1:1 with API endpoints |
| Command execution context manager (`db_context_manager`) | Clusters (Shared, Single User) | Higher level `R6` class for command execution contexts |
| Command execution API (`db_context_*`) | Clusters (Shared, Single User) | Lower level functions that align 1:1 with API endpoints |
| Databricks REPL (`db_repl`) | Clusters (Shared, Single User) | Supports all notebook languages; R is only supported on single user clusters |
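For a one-off query against a SQL warehouse, `db_sql_query` is the simplest option. A minimal sketch (the warehouse id and table name below are placeholders, and the argument names are assumptions that may differ from the function's actual signature):

```r
library(brickster)

# run a query against an existing SQL warehouse and collect the results
# (warehouse id and table name are placeholders)
results <- db_sql_query(
  warehouse_id = "<insert warehouse id>",
  statement = "SELECT * FROM samples.nyctaxi.trips LIMIT 10"
)
```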
The Databricks REPL (`db_repl()`) is the focus of this article.
The REPL temporarily connects the existing R console to a Databricks cluster (via command execution APIs) and allows code in all supported languages to be sent interactively - as if it were running locally.
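Under the hood this is built on the command execution wrappers listed above. A rough sketch of one round trip, assuming the `db_context_*` functions follow the command execution API (the exact argument names and returned fields are assumptions and may differ):

```r
library(brickster)

cluster_id <- "<insert cluster id>"

# create an execution context on the cluster for R commands
# (assumed signature, mirroring the command execution API)
ctx <- db_context_create(cluster_id = cluster_id, language = "r")

# send a command to the remote context
cmd <- db_context_command_run(
  cluster_id = cluster_id,
  context_id = ctx$id,
  language = "r",
  command = "Sys.time()"
)

# poll for the command's status/result
status <- db_context_command_status(
  cluster_id = cluster_id,
  context_id = ctx$id,
  command_id = cmd$id
)

# clean up the context when finished
db_context_destroy(cluster_id = cluster_id, context_id = ctx$id)
```

The REPL manages this create/run/poll/destroy cycle for you, which is why it behaves like a local console.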
Using the REPL is simple: to start, just provide a `cluster_id`:

```r
# start REPL
db_repl(cluster_id = "<insert cluster id>")
```
The REPL will check the cluster's state and start the cluster if it is inactive. The default language is `R`.
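The equivalent manual check might look like the sketch below, using the cluster API wrappers (assuming `db_cluster_get()` returns a list with a `state` field whose values follow the Databricks Clusters API, e.g. `"RUNNING"`, `"TERMINATED"`):

```r
library(brickster)

cluster_id <- "<insert cluster id>"

# inspect the cluster's current state via the Clusters API
info <- db_cluster_get(cluster_id = cluster_id)

# start the cluster if it isn't already running
# ("state" field and its values are assumptions based on the Clusters API)
if (info$state != "RUNNING") {
  db_cluster_start(cluster_id = cluster_id)
}
```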
After successfully connecting to the cluster you can run commands against the remote compute from the local session.
The REPL has a shortcut: enter `:<language>` to change the active language. You can switch between the following languages:
| Language | Shortcut |
|----------|----------|
| R        | `:r`     |
| Python   | `:py`    |
| SQL      | `:sql`   |
| Scala    | `:scala` |
| Shell    | `:sh`    |
When you change between languages, all variables persist until the REPL is exited.
Some limitations to be aware of:

- Development environments (e.g. RStudio, Positron) won't display variables from the remote contexts in the environment pane
- HTML content will only render for Python; {htmlwidgets} rendering is restricted due to notebook limitations that currently require a workaround
- Not designed to work with interactive serverless compute
- Cannot persist or recover sessions
- Multi-line expressions are only supported for R; Python, Scala, and SQL are limited to single-line expressions