**NOTE: As of now, I have not been able to get this to work. Using mpiexec leads to duplication of the same task across all nodes in the cluster. If I do not use mpiexec, I get errors along the lines of `match_arg (utils/args/args.c:159): unrecognized argument pmi_args`. So I have gone back to using SOCK-based clusters for now.**

If you are working on a cluster and need to install MPICH and Rmpi from source, there are a couple of tricks. The process is very similar to installing OpenMPI and Rmpi from source, but adapted for MPICH. OpenMPI was giving me headaches on our HPC cluster, so I am moving to MPICH.

Installing MPICH itself is pretty straightforward:

```shell
wget http://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz
tar -zxvf mpich-3.2.tar.gz
cd mpich-3.2
./configure --prefix=/path/to/your/local
make install
```
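Since this installs to a non-standard prefix, the new binaries and libraries will not be found automatically. A minimal sketch of the environment setup, assuming the same placeholder prefix as in the configure line above (adjust for your site):

```shell
# Same placeholder prefix passed to ./configure above; adjust to your site.
PREFIX=/path/to/your/local
export PATH="$PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$PREFIX/lib:$LD_LIBRARY_PATH"

# mpicc and mpiexec should now resolve from inside $PREFIX
echo "$PATH" | grep -q "$PREFIX/bin" && echo "PATH updated"
```

Putting these lines in your shell startup file (or a module file) means R will see the same environment later when it builds Rmpi.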

You can then build the tests as well, if desired:

```shell
cd tests/mpi
./configure --prefix=/path/to/your/local
```

And get a cup of coffee. Then you need to let R know where MPICH is and what kind of MPI it is:

```r
install.packages("Rmpi",
                 configure.args = c("--with-Rmpi-type=MPICH2",
                                    "--with-mpi=/primary/projects/jovinge/local"))
```
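Before spinning up a cluster, you can sanity-check that Rmpi loads and was linked against your MPICH with a one-liner. `mpi.universe.size()` and `mpi.quit()` are standard Rmpi calls, though what the universe size reports outside a proper MPI launch depends on your site:

```shell
# Quick check that the Rmpi build is usable (assumes Rmpi installed as above)
Rscript -e 'library(Rmpi); cat("universe size:", mpi.universe.size(), "\n"); mpi.quit()'
```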

Then you should be set to go. Let's adapt an example [from R bloggers](http://www.r-bloggers.com/a-very-short-and-unoriginal-introduction-to-snow/) to our purposes:

```r
library(snow)

cl <- makeCluster(20, type = "MPI")
x <- sample(0:50, 1000, replace = TRUE)
bs.mean <- function(v) {
  s <- sample(v, length(v), replace = TRUE)
  mean(s)  # bootstrap mean of one resample
}
clusterCall(cl, bs.mean, x)
stopCluster(cl)
```