Takes an R matrix and distributes it as a distributed matrix, or takes a distributed matrix and redistributes it across a (possibly) new BLACS context, using a (possibly) new blocking dimension.


| Argument | Description |
|---|---|
| `dx` | numeric distributed matrix |
| `bldim` | the blocking dimension for block-cyclically distributing the matrix across the process grid |
| `ICTXT` | BLACS context number for return |

`distribute()` takes an R matrix `x` stored on the processes in some fashion and distributes it across the process grid belonging to `ICTXT`. If a process is to call `distribute()` and does not yet have any ownership of the matrix `x`, then that process should store `NULL` for `x`.

How one might typically use this is to read in a non-distributed matrix on the first process, store that result as the R matrix `x`, and then have the other processes store `NULL` for `x`. Calling `distribute()` then returns the distributed matrix, distributed according to the options `bldim` and `ICTXT`.

Using an `ICTXT` value other than zero is not recommended unless you have a good reason to. Other contexts should only be considered by advanced users, preferably those with knowledge of ScaLAPACK.

`redistribute()` takes a distributed matrix and redistributes it to the (possibly) new process grid with BLACS context `ICTXT` and with the (possibly) new blocking dimension `bldim`. The original BLACS context is `dx@ICTXT` and the original blocking dimension is `dx@bldim`.
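As a hedged sketch of this (not taken from the package's own examples), the fragment below moves a distributed matrix onto BLACS context 1 with a new 2x2 blocking; it assumes `init.grid()` has set up the default contexts and, like all pbdDMAT code, must be launched under MPI (e.g. `mpiexec -np 2 Rscript file.r`):

```r
library(pbdDMAT, quiet = TRUE)
init.grid()

# Build a small distributed test matrix with a 4x4 blocking on the default grid.
dx <- ddmatrix(1:30, nrow = 6, ncol = 5, bldim = c(4, 4))

# Redistribute to context 1 with a (possibly) new blocking dimension.
dy <- redistribute(dx, bldim = c(2, 2), ICTXT = 1)

comm.print(dy@ICTXT)   # inspect the new context
comm.print(dy@bldim)   # inspect the new blocking dimension
finalize()
```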

These two functions are essentially simple wrappers for the ScaLAPACK function `PDGEMR2D`, with the behavior described above. Of note, for `distribute()`, `dx@ICTXT` and `ICTXT` must share at least one process in common. Likewise for `redistribute()` with `dx@ICTXT` and `ICTXT`.

Very general redistributions can be done with `redistribute()`, but thinking in these terms is an acquired skill. For this reason, several simple interfaces to this function have been written.
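As a purely hypothetical illustration (the helper name below is invented, not part of pbdDMAT), such a simple interface might wrap `redistribute()` to change only the blocking while keeping the matrix's current context:

```r
# Hypothetical helper, not part of pbdDMAT: rechunk a distributed matrix to a
# square b-by-b blocking without changing its BLACS context.
reblock <- function(dx, b) {
  redistribute(dx, bldim = c(b, b), ICTXT = dx@ICTXT)
}
```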

Returns a distributed matrix.

```
## Not run:
# Save code in a file "demo.r" and run with 2 processors by
# > mpiexec -np 2 Rscript demo.r
library(pbdDMAT, quiet = TRUE)
init.grid()
if (comm.rank() == 0) {
  x <- matrix(1:16, ncol=4)
} else {
  x <- NULL
}
dx <- distribute(x, bldim=c(4,4))
print(dx)
dx <- redistribute(dx, bldim=c(3,3))
print(dx)
finalize()
## End(Not run)
```
