select: Select


Description

Selects a set of columns with names or Column expressions.

Usage

select(x, col, ...)

## S4 method for signature 'SparkDataFrame'
x$name

## S4 replacement method for signature 'SparkDataFrame'
x$name <- value

## S4 method for signature 'SparkDataFrame,character'
select(x, col, ...)

## S4 method for signature 'SparkDataFrame,Column'
select(x, col, ...)

## S4 method for signature 'SparkDataFrame,list'
select(x, col)

Arguments

x

a SparkDataFrame.

col

a list of columns, or a single Column or column name.

...

additional column(s), used only if a single column is specified in col. If more than one column is specified in col, ... should be left empty.

name

name of a Column (given unquoted, not wrapped in "").

value

a Column, or an atomic vector of length 1 used as a literal value, or NULL. If NULL, the specified Column is dropped.
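
A minimal sketch of the $ and $<- methods described by the name and value arguments above; it assumes a SparkDataFrame df with columns "name" and "age", as in the Examples section below.

  df$age                 # extracts the "age" Column expression
  df$age2 <- df$age + 1  # adds a derived Column
  df$flag <- TRUE        # a length-1 atomic vector becomes a literal column
  df$flag <- NULL        # assigning NULL drops the column again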

Value

A new SparkDataFrame with selected columns.
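
A minimal sketch of the returned value, assuming a SparkSession has been started with sparkR.session() and df is built from the iris data set (SparkR replaces "." with "_" in column names).

  df <- createDataFrame(iris)
  df2 <- select(df, "Sepal_Length", "Species")
  columns(df2)           # "Sepal_Length" "Species"
  nrow(df2) == nrow(df)  # TRUE: only the columns change, not the rows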

Note

$ since 1.4.0

$<- since 1.4.0

select(SparkDataFrame, character) since 1.4.0

select(SparkDataFrame, Column) since 1.4.0

select(SparkDataFrame, list) since 1.4.0

See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), withColumn(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream(), write.text()

Other subsetting functions: filter(), subset()

Examples

## Not run: 
  select(df, "*")
  select(df, "col1", "col2")
  select(df, df$name, df$age + 1)
  select(df, c("col1", "col2"))
  select(df, list(df$name, df$age + 1))
  # Similar to R data frames, columns can also be selected using $
  df[,df$age]

## End(Not run)
