Description

Return a new SparkDataFrame containing rows in both this SparkDataFrame and
another SparkDataFrame while preserving the duplicates. This is equivalent to
INTERSECT ALL in SQL. As is standard in SQL, this function resolves columns by
position (not by name).
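For example, a minimal sketch (not part of the original help page, assuming a
running local SparkR session) showing how duplicate rows are kept:

library(SparkR)
sparkR.session()

# Both inputs contain the row 1 twice, so INTERSECT ALL keeps two copies of
# that row; a plain intersect() would return it only once.
df1 <- createDataFrame(data.frame(value = c(1, 1, 2, 3)))
df2 <- createDataFrame(data.frame(value = c(1, 1, 2)))

showDF(intersectAll(df1, df2))   # contains the rows 1, 1 and 2 (in no particular order)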
Usage

intersectAll(x, y)

## S4 method for signature 'SparkDataFrame,SparkDataFrame'
intersectAll(x, y)
Arguments

x    a SparkDataFrame.

y    a SparkDataFrame.
Value

A SparkDataFrame containing the result of the intersect all operation.
Note

intersectAll since 2.4.0
See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg(), alias(), arrange(),
as.data.frame(), attach,SparkDataFrame-method, broadcast(), cache(), checkpoint(),
coalesce(), collect(), colnames(), coltypes(), createOrReplaceTempView(),
crossJoin(), cube(), dapplyCollect(), dapply(), describe(), dim(), distinct(),
dropDuplicates(), dropna(), drop(), dtypes(), exceptAll(), except(), explain(),
filter(), first(), gapplyCollect(), gapply(), getNumPartitions(), group_by(),
head(), hint(), histogram(), insertInto(), intersect(), isLocal(), isStreaming(),
join(), limit(), localCheckpoint(), merge(), mutate(), ncol(), nrow(), persist(),
printSchema(), randomSplit(), rbind(), rename(), repartitionByRange(),
repartition(), rollup(), sample(), saveAsTable(), schema(), selectExpr(),
select(), showDF(), show(), storageLevel(), str(), subset(), summary(), take(),
toJSON(), unionAll(), unionByName(), union(), unpersist(), withColumn(),
withWatermark(), with(), write.df(), write.jdbc(), write.json(), write.orc(),
write.parquet(), write.stream(), write.text()
Examples

## Not run:
sparkR.session()
df1 <- read.json(path)
df2 <- read.json(path2)
intersectAllDF <- intersectAll(df1, df2)
## End(Not run)