Wednesday, March 05, 2008
How to install MPI in Ubuntu
1. Go get Ubuntu at http://www.ubuntu.com/getubuntu/download

I am using the Desktop version, 7.04 (Feisty Fawn).

After installing Ubuntu, we will basically follow the instructions at

https://help.ubuntu.com/community/MPICHCluster
2. Linux environment setup
a. set up an NFS server on the master node (ub0)
a.1 install NFS server
sudo apt-get install nfs-kernel-server
a.2 share the master folder
add this line to /etc/exports:
/mirror *(rw,sync)
a.3 create folder
sudo mkdir /mirror
a.4 start NFS service on master node
sudo /etc/init.d/nfs-kernel-server start
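To check that the export is active, you can run showmount (it normally comes with the NFS packages; if not, it is in nfs-common):
showmount -e localhost
It should list /mirror.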
b. mount the master's shared folder on the other (non-master) nodes
b.1 add the IP address of ub0 to /etc/hosts on each node
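For example, assuming ub0's address on your LAN is 192.168.1.100 (substitute the real one), the entry would look like:
192.168.1.100 ub0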
b.2 create mount point and mount it
sudo mkdir /mirror
sudo mount ub0:/mirror /mirror
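Optionally, to have the share mounted automatically at boot instead of mounting it by hand, a line roughly like this can be added to /etc/fstab on each non-master node (a sketch; adjust the options to your setup):
ub0:/mirror /mirror nfs rw,hard,intr 0 0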
c. create a universal user mpiu on all nodes
sudo useradd -gusers -s/bin/bash -d/mirror -m mpiu
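useradd does not set a password, so if you need to log in as mpiu directly, set one with the standard passwd command:
sudo passwd mpiu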

d. setting up SSH with no passphrase for communication between nodes
d.1 login as mpiu
sudo su - mpiu
d.2 generate DSA key for mpiu:
ssh-keygen -t dsa
Note: Leave passphrase empty.
d.3 add the content of id_dsa.pub to authorized_keys
cd .ssh
cat id_dsa.pub >> authorized_keys
d.4 test passwordless login
ssh ub1
Because mpiu's home directory is the shared /mirror folder, the same key pair is visible on every node, so this should log in without prompting for a password.

3. Download MPICH2 and build it
resources: MPICH2 documentation

http://www.mcs.anl.gov/research/projects/mpich2/documentation/index.php?s=docs

downloads:

http://www.mcs.anl.gov/research/projects/mpich2/downloads/index.php?s=downloads

3.1 Installing GCC
sudo apt-get install build-essential
3.2 Installing MPICH2
copy the downloaded mpich2-1.0.7rc1.tar.gz into /mirror, then build and install it into a shared prefix:
cd /mirror
mkdir mpich2
tar xvf mpich2-1.0.7rc1.tar.gz
cd mpich2-1.0.7rc1
./configure --prefix=/mirror/mpich2
make
sudo make install
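A quick sanity check that the build and install worked is to list the shared install directory:
ls /mirror/mpich2/bin
It should contain mpicc, mpiexec, mpd and friends.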
3.3 set up environment variables in .bash_profile
add these two lines to mpiu's .bash_profile:
export PATH=/mirror/mpich2/bin:$PATH
export LD_LIBRARY_PATH=/mirror/mpich2/lib:$LD_LIBRARY_PATH
Also update PATH in /etc/environment (variables defined there apply automatically at login for all sessions) by adding /mirror/mpich2/bin to its PATH entry, as in the example below.
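For example, the PATH line in /etc/environment might end up looking like this (keep whatever directories are already listed on your system; the tail shown here is just the usual Ubuntu default, used as an illustration):
PATH="/mirror/mpich2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
After logging out and back in, which mpicc should point into /mirror/mpich2/bin.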

4. Set up the MPICH2 MPD
4.1 Create mpd.hosts in mpiu's home directory with the node names, for
example
ub0
ub1
4.2 Create the MPD credential file
echo secretword=something >> ~/.mpd.conf
chmod 600 ~/.mpd.conf
4.3. Start MPD on all nodes of the cluster
mpdboot -n <number of hosts defined in mpd.hosts>
mpdtrace
The output should be the names of all nodes.
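For example, with the two hosts listed in mpd.hosts above:
mpdboot -n 2
mpdtrace
should print the hostnames of the ring members, something like:
ub0
ub1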
4.4. compile an example and run it
In mpich2-1.0.7rc1/examples, there is a parallel program that calculates pi (cpi).
Copy cpi.c to /mirror, then compile and run it:
mpicc cpi.c -o cpi
mpiexec -n 4 ./cpi
It should display something like:
0: pi is approximately 3.1415926...., Error is .....
0: wall clock time = ....

For MPD troubleshooting, see mpich2-doc-install.pdf, Appendix A: Troubleshooting MPDs.


HelloWorld.c (from wiki)
/*
 * "Hello World" type MPI test program
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define BUFSIZE 128
#define TAG 0

int main(int argc, char *argv[])
{
    char idstr[32];
    char buff[BUFSIZE];
    int numprocs;
    int myid;
    int i;
    MPI_Status stat;

    MPI_Init(&argc, &argv); /* all MPI programs start with MPI_Init; all 'N' processes exist thereafter */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs); /* find out how big the SPMD world is */
    MPI_Comm_rank(MPI_COMM_WORLD, &myid); /* and this process's rank within it */

    /* At this point, all the programs are running equivalently; the rank is
       used to distinguish the roles of the programs in the SPMD model, with
       rank 0 often used specially... */
    if (myid == 0)
    {
        printf("%d: We have %d processors\n", myid, numprocs);
        for (i = 1; i < numprocs; i++)
        {
            sprintf(buff, "Hello %d! ", i);
            MPI_Send(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD);
        }
        for (i = 1; i < numprocs; i++)
        {
            MPI_Recv(buff, BUFSIZE, MPI_CHAR, i, TAG, MPI_COMM_WORLD, &stat);
            printf("%d: %s\n", myid, buff);
        }
    }
    else
    {
        /* receive from rank 0: */
        MPI_Recv(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD, &stat);
        sprintf(idstr, "Processor %d ", myid);
        strcat(buff, idstr);
        strcat(buff, "reporting for duty\n");
        /* send to rank 0: */
        MPI_Send(buff, BUFSIZE, MPI_CHAR, 0, TAG, MPI_COMM_WORLD);
    }

    MPI_Finalize(); /* MPI programs end with MPI_Finalize; this is a weak synchronization point */
    return 0;
}
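To try it, save the listing as helloworld.c in /mirror (the file name is just a suggestion), then compile and run it the same way as cpi:
mpicc helloworld.c -o helloworld
mpiexec -n 4 ./helloworld
With four processes, rank 0 should print a "We have 4 processors" line followed by a "reporting for duty" line from each of the other ranks.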


