qr_allreduce: QR Reduction


View source: R/qr_allreduce.r

Description

Given an R matrix from a QR factorization on each process, this computes the R matrix that would be obtained by "stacking" all of the R matrices on top of each other (as in do.call(rbind, R_list)) and re-factoring. The actual computation is much more memory efficient, and likely faster as well unless the local R matrices are very small.
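The stacking semantics can be illustrated in base R on a single process. This is only a sketch of what the result means, not the package's implementation (which avoids materializing the stacked matrix):

```r
# Per-process R factors, simulated here as a plain list on one process.
set.seed(1234)
R_list = lapply(1:3, function(i) qr.R(qr(matrix(rnorm(15), 5, 3))))

# Conceptual result: the R factor of the row-wise stack of all local R's.
R_out = qr.R(qr(do.call(rbind, R_list)))
dim(R_out)  # 3 x 3: same dimensions as each input R matrix
```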

Usage

qr_allreduce(x, comm = 0L, type = "double")

qr_reduce(x, root = 0L, comm = 0L, type = "double")

Arguments

x

The input data: a numeric matrix. It should have the same dimensions on every process.

comm

MPI communicator number.

type

The precision used for the intermediate calculations. Should be one of "double" or "float".

root

The MPI rank that receives the result in the non-all version, qr_reduce().

Details

This can be used to implement a TSQR (Tall and Skinny QR) factorization. If the tall matrix is split by rows across processes, compute the R factor of each local chunk and then pass it to qr_reduce().

This works by defining a custom MPI data type (a dense matrix) with a custom reduction operation: given two R matrices, "stack" them, compute the QR of the stacked matrix, and emit its R factor. Each local factorization uses the LAPACK function _geqp3(), similar to R's qr() with LAPACK=TRUE.
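The pairwise reduction and the resulting TSQR can be sketched in base R without MPI. Note this is an illustration only: base qr() does not use column pivoting, unlike the _geqp3() routine the package calls. Because R factors are unique only up to row signs, the check below compares crossproducts, which are sign-invariant:

```r
# The pairwise reduction the custom MPI op performs: stack two R factors
# and re-factor (base qr(), i.e. unpivoted -- a simplification).
stack_qr = function(R1, R2) qr.R(qr(rbind(R1, R2)))

# TSQR on a tall matrix split into row chunks, reduced pairwise.
set.seed(1234)
x = matrix(rnorm(300), 100, 3)
chunks = list(x[1:50, ], x[51:100, ])
R_list = lapply(chunks, function(chunk) qr.R(qr(chunk)))
R = Reduce(stack_qr, R_list)

# Sign-invariant check: R^T R should equal x^T x.
all.equal(crossprod(R), crossprod(x))
```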

Value

A numeric matrix if the all version is called, or if the calling rank equals root; otherwise NULL.

See Also

cop_allreduce()

Examples

## Not run: 
suppressMessages(library(cop))

x = matrix(1:4, 2) + 10*comm.rank()  # a different local matrix on each rank
out = qr_reduce(x)                   # result lands on rank 0 (the default root)
mpi_print(out)

finalize()

## End(Not run)

RBigData/cop documentation built on March 10, 2021, 8:21 p.m.