sub-.tbl_spark: Subsetting operator for Spark dataframe

[.tbl_spark    R Documentation

Subsetting operator for Spark dataframe

Description

Subsetting operator for Spark dataframes, allowing a subset of columns to be selected using a syntax similar to that supported by R dataframes

Usage

## S3 method for class 'tbl_spark'
x[i]

Arguments

x

The Spark dataframe

i

Expression specifying the subset of column(s) to include in or exclude from the result (e.g., '["col1"]', '[c("col1", "col2")]', '[1:10]', '[-1]', '[NULL]', or '[]')
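To illustrate the forms 'i' can take, the sketch below assumes a hypothetical Spark dataframe 'sdf' with columns a, b and c, created over an active Spark connection (the connection setup mirrors the Examples section):

```r
## Not run:
library(sparklyr)
sc <- spark_connect(master = "local")
# `sdf` is a hypothetical example dataframe, not part of the package
sdf <- copy_to(sc, tibble::tibble(a = 1, b = 2, c = 3))

sdf["a"]            # single column by name
sdf[c("a", "b")]    # multiple columns by name
sdf[1:2]            # columns by position
sdf[-1]             # exclude the first column
sdf[NULL]           # zero columns
sdf[]               # all columns (identity)

## End(Not run)
```

Each form returns another 'tbl_spark', so subsetting operations can be chained or piped into further dplyr verbs.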

Examples


## Not run: 
library(sparklyr)
sc <- spark_connect(master = "spark://HOST:PORT")
example_sdf <- copy_to(sc, tibble::tibble(a = 1, b = 2))
example_sdf["a"] %>% print()

## End(Not run)


sparklyr documentation built on Nov. 2, 2023, 5:09 p.m.