A total of five machines run CentOS 6.4, with host names node0, node1, node2, node3, and node4. node0 serves as the master node, which here means that node0 acts as the NFS server.
MPICH2 installation package: mpich2-1.2.1p1.tar.gz, which can be downloaded from the official MPICH website.
The following operations are performed as the root user.
1. Configure the network on all five machines. For details, see my other article "CentOS static IP network configuration".
2. Create a user with the same name (cluster) and the same password on all five machines:
Modify the /etc/sudoers file, adding the following line so that the cluster user temporarily has root privileges:
cluster ALL=(ALL) ALL
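A minimal sketch of creating the user, assuming the standard useradd/passwd tools (the commands are not shown in the original; run them as root on every node):

```shell
# Run as root on each of node0 .. node4.
# Creates the shared user "cluster" with a home directory,
# then sets the same password interactively on every machine.
useradd -m cluster
passwd cluster
```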
The following operations are performed as the cluster user.
3. Configure SSH so that any two machines can log in to each other directly without a password. For details, see my other article "SSH configuration for two hosts to log in to each other directly without a password".
4. Configure NFS, with node0 as the server and the other four machines as clients; all machines share the directory /home/cluster/mirror. For the configuration procedure, see my other article "NFS installation under CentOS".
5. Install the MPICH2 development environment on node0.
First make sure that gcc, g++, make, and python are installed, as these tools are needed for compilation.
Create the MPICH2 installation directory:
mkdir /home/cluster/mirror/mpich2
Upload mpich2-1.2.1p1.tar.gz to /home/cluster/mirror and extract it:
tar -zxvf mpich2-1.2.1p1.tar.gz
Run the following command in the /home/cluster/mirror/mpich2-1.2.1p1 directory:
./configure --prefix=/home/cluster/mirror/mpich2
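The remaining build steps are not shown in the original; assuming the standard MPICH2 source installation, configure is followed by:

```shell
# In /home/cluster/mirror/mpich2-1.2.1p1, after ./configure completes:
make          # compile MPICH2
make install  # install into the --prefix directory (/home/cluster/mirror/mpich2)
```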
Create the /home/cluster/mpd.hosts file with the following contents:
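The file contents are not shown in the original; presumably it lists the five host names from the setup above, one per line:

```
node0
node1
node2
node3
node4
```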
6. Configure the environment variables on all five machines.
Add the following to /home/cluster/.bashrc:
export PATH=$PATH:/home/cluster/mirror/mpich2/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/cluster/mirror/mpich2/lib
Make the configuration file take effect:
Test whether MPICH2 was installed successfully:
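The command itself is not shown; presumably the file is reloaded with source:

```shell
# Re-read .bashrc in the current shell so the new PATH takes effect
source /home/cluster/.bashrc
```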
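A simple check (the exact commands are assumed, not shown in the original) is to ask the shell where the MPICH2 tools resolve from; they should point into the installation prefix:

```shell
which mpd      # should print /home/cluster/mirror/mpich2/bin/mpd
which mpicc    # compiler wrapper from the same prefix
which mpiexec
```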
Create the /home/cluster/.mpd.conf file with the following contents (the string in quotes can be arbitrary, but it must be the same on all machines):
secretword = "lab311"
Modify the permissions of the .mpd.conf file so that only the cluster user has read and write access to it:
chmod 600 /home/cluster/.mpd.conf
Test that each machine can start the mpd manager. Under normal circumstances, mpdtrace will print the host name of the local machine. Note that after the test you must shut down mpd with the mpdallexit command, otherwise a connection failure error will occur when starting the cluster later:
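A sketch of the per-machine test, assuming standard MPD usage (the exact command lines are not shown in the original):

```shell
mpd &        # start the mpd daemon in the background
mpdtrace     # should print this machine's host name
mpdallexit   # shut the daemon down again before starting the cluster
```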
7. Test the entire cluster from node0.
Before starting the cluster, you need to turn off the firewall on all machines:
sudo service iptables stop
sudo chkconfig iptables off
Start the cluster (the parameter -n 5 means start five machines):
mpdboot -n 5 -f /home/cluster/mpd.hosts
Check which machines have been started:
Under normal circumstances, the following results will appear:
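Presumably the check uses mpdtrace; with all five machines in the ring it should list every host name:

```shell
mpdtrace
# Expected output (order may vary):
#   node0
#   node1
#   node2
#   node3
#   node4
```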
Shut down the cluster:
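The command itself is not shown; as noted in the per-machine mpd test above, the ring is shut down with mpdallexit:

```shell
mpdallexit   # stop the mpd daemons on all machines in the ring
```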