cache_table    R Documentation
Spark SQL can cache tables using an in-memory columnar format by calling
cache_table(). Spark SQL will scan only the required columns and will
automatically tune compression to minimize memory usage and GC pressure.
You can call uncache_table() to remove the table from memory. Similarly,
you can call clear_cache() to remove all cached tables from the in-memory
cache. Finally, use is_cached() to test whether or not a table is cached.
cache_table(sc, table)

clear_cache(sc)

is_cached(sc, table)

uncache_table(sc, table)
sc
A spark_connection.

table
character(1). The name of the table.
cache_table(): If successful, TRUE, otherwise FALSE.
clear_cache(): NULL, invisibly.
is_cached(): A logical(1) vector: TRUE if the table is cached,
FALSE otherwise.
uncache_table(): NULL, invisibly.
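Because is_cached() returns a single logical value and cache_table() returns TRUE or FALSE, they can be combined in ordinary control flow. A minimal sketch, assuming a connection sc with a registered table, as in the examples below; ensure_cached() is a hypothetical helper, not part of this package:

# Hypothetical helper: cache a table only if it is not already cached
ensure_cached <- function(sc, table) {
  if (!is_cached(sc = sc, table = table)) {
    cache_table(sc = sc, table = table)
  }
  # Return the (now updated) cache status, invisibly
  invisible(is_cached(sc = sc, table = table))
}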
create_table(), get_table(), list_tables(), refresh_table(),
table_exists(), uncache_table()
## Not run:
sc <- sparklyr::spark_connect(master = "local")
mtcars_spark <- sparklyr::copy_to(dest = sc, df = mtcars)

# By default the table is not cached
is_cached(sc = sc, table = "mtcars")

# We can manually cache the table
cache_table(sc = sc, table = "mtcars")
# And now the table is cached
is_cached(sc = sc, table = "mtcars")

# We can uncache the table
uncache_table(sc = sc, table = "mtcars")
is_cached(sc = sc, table = "mtcars")
## End(Not run)
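The example above does not exercise clear_cache(), which removes every cached table at once. A minimal sketch, assuming the same local connection sc and copied mtcars table:

# Cache the table, then drop all cached tables from the in-memory cache
cache_table(sc = sc, table = "mtcars")
clear_cache(sc = sc)
is_cached(sc = sc, table = "mtcars")  # now FALSE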