rbind: Union two or more SparkDataFrames


Description

Union two or more SparkDataFrames by row. As in R's rbind, this method requires that the input SparkDataFrames have the same column names.

Usage

rbind(..., deparse.level = 1)

## S4 method for signature 'SparkDataFrame'
rbind(x, ..., deparse.level = 1)

Arguments

...

additional SparkDataFrame(s).

deparse.level

currently not used (put here to match the signature of the base implementation).

x

a SparkDataFrame.

Details

Note: This does not remove duplicate rows across the input SparkDataFrames.
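
Because the result can contain duplicates, they have to be removed explicitly if needed. A minimal sketch (the data.frames and column names below are purely illustrative, and a running Spark session is assumed):

df1 <- createDataFrame(data.frame(id = c(1, 2), name = c("a", "b")))
df2 <- createDataFrame(data.frame(id = c(2, 3), name = c("b", "c")))

combined <- rbind(df1, df2)
count(combined)            # 4 rows; the overlapping row (2, "b") is kept
count(distinct(combined))  # 3 rows after dropping the duplicate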

Value

A SparkDataFrame containing the result of the union.

Note

rbind since 1.5.0

See Also

union, unionByName

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(), as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(), coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(), crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(), dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(), filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(), head(), hint(), histogram(), insertInto(), intersectAll(), intersect(), isLocal(), isStreaming(), join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(), printSchema(), randomSplit(), rename(), repartitionByRange(), repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(), select(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(), toJSON(), unionAll(), unionByName(), union(), unpersist(), withColumn(), withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(), write.parquet(), write.stream(), write.text()

Examples

## Not run: 
sparkR.session()
unions <- rbind(df, df2, df3, df4)

## End(Not run)
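
A more self-contained sketch of the same pattern (the inputs df, df2, df3, df4 in the example above are placeholders; the data and column names here are illustrative):

## Not run: 
sparkR.session()

# All inputs must have the same column names
df  <- createDataFrame(data.frame(id = 1:2, value = c("a", "b")))
df2 <- createDataFrame(data.frame(id = 3:4, value = c("c", "d")))
df3 <- createDataFrame(data.frame(id = 5:6, value = c("e", "f")))

unions <- rbind(df, df2, df3)
nrow(unions)  # 6

## End(Not run)

To combine a list of SparkDataFrames of arbitrary length, the same rules apply to do.call(rbind, list_of_frames).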
