This experiment focuses on building a high-availability MySQL cluster; without further ado, let's go straight into the installation and configuration.
I. Introduction and environment preparation
1. Two nodes are used in this setup: nod1.allen.com (172.16.14.1) and nod2.allen.com (172.16.14.2)
###### Run the following command on both NOD1 and NOD2
cat > /etc/hosts << EOF
172.16.14.1 nod1.allen.com nod1
172.16.14.2 nod2.allen.com nod2
EOF
Note: this ensures that the host names and corresponding IP addresses of all nodes resolve correctly
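A quick optional check (not in the original steps) that both names resolve correctly; run it on each node:
ping -c 1 nod1
ping -c 1 nod2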
2. The host name of each node must match the output of the "uname -n" command
###### Run on node NOD1
sed -i 's@\(HOSTNAME=\).*@\1nod1.allen.com@g' /etc/sysconfig/network
hostname nod1.allen.com
###### Run on node NOD2
sed -i 's@\(HOSTNAME=\).*@\1nod2.allen.com@g' /etc/sysconfig/network
hostname nod2.allen.com
Note: changing the file alone only takes effect after a reboot; by editing the file and also running the hostname command, no reboot is needed
3. Provide a partition of the same size on both nod1 and nod2 as the DRBD device; here "/dev/sda3" is created on both nodes as the DRBD device, with a capacity of 2G
###### Create the partition on NOD1 and NOD2 separately; the partition size must be the same on both nodes
fdisk /dev/sda
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (7859-15665, default 7859):
Using default value 7859
Last cylinder, +cylinders or +size{K,M,G} (7859-15665, default 15665): +2G
Command (m for help): w
partx /dev/sda          # have the kernel re-read the partition table
###### Check whether the kernel has recognized the new partition; if it has not, a reboot is required, otherwise no reboot is needed
cat /proc/partitions
major minor #blocks name
8 0 125829120 sda
8 1 204800 sda1
8 2 62914560 sda2
253 0 20971520 dm-0
253 1 2097152 dm-1
253 2 10485760 dm-2
253 3 20971520 dm-3
reboot
4. Disable SELinux, iptables, and NetworkManager on both servers
setenforce 0                        # disable SELinux
service iptables stop               # stop iptables
chkconfig iptables off              # keep iptables from starting at boot
service NetworkManager stop
chkconfig NetworkManager off
chkconfig --list NetworkManager
NetworkManager  0:off  1:off  2:off  3:off  4:off  5:off  6:off
chkconfig network on
chkconfig --list network
network         0:off  1:off  2:on   3:on   4:on   5:on   6:off
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Note: during this procedure the NetworkManager service must be stopped and disabled at boot, and the network service must be enabled at boot; otherwise the cluster will run into unnecessary trouble and may not work properly
5. Configure the YUM (epel) source and synchronize the time; the time on the two nodes must be kept in sync (epel source download)
###### Configure the epel source
###### Install it on both NOD1 and NOD2
rpm -ivh epel-release-6-8.noarch.rpm
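The original does not show the time-sync command itself; a simple possibility (the NTP server address below is only a placeholder assumption) is to run the following on both nodes, or schedule it from cron:
ntpdate 172.16.0.1        # replace 172.16.0.1 with an NTP server reachable from both nodes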
6. Set up mutual SSH trust between the two systems
[root@nod1 ~]# ssh-keygen -t rsa
[root@nod1 ~]# ssh-copy-id -i .ssh/id_rsa.pub nod2
==================================================
[root@nod2 ~]# ssh-keygen -t rsa
[root@nod2 ~]# ssh-copy-id -i .ssh/id_rsa.pub nod1
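A quick optional check that the passwordless trust works in both directions:
[root@nod1 ~]# ssh nod2 'date'
[root@nod2 ~]# ssh nod1 'date'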
7. System version: CentOS 6.4_x86_64
8. Software used: pacemaker and corosync are included in the CD image
pssh-2.3.1-2.el6.x86_64              (download in the attachment)
crmsh-1.2.6-4.el6.x86_64             (download in the attachment)
drbd-8.4.3-33.el6.x86_64             (DRBD download: http://rpmfind.net)
drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64
mysql-5.5.33-linux2.6-x86_64         (click here to download)
pacemaker-1.1.8-7.el6.x86_64
corosync-1.4.1-15.el6.x86_64
--------------------------------------------------------------------------------
II. Install and configure DRBD
1. Install the DRBD packages on nodes NOD1 and NOD2
###### NOD1
[root@nod1 ~]# ls drbd-*
drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
[root@nod1 ~]# yum -y install drbd-*.rpm
###### NOD2
[root@nod2 ~]# ls drbd-*
drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
[root@nod2 ~]# yum -y install drbd-*.rpm
2. View the DRBD configuration files
ll /etc/drbd.conf; ll /etc/drbd.d/
-rw-r--r-- 1 root root 133 May 14 21:12 /etc/drbd.conf          # main configuration file
total 4
-rw-r--r-- 1 root root 1836 May 14 21:12 global_common.conf     # global configuration file
###### View the contents of the main configuration file
cat /etc/drbd.conf
###### The main configuration file includes the global configuration file and the files ending in .res under the "drbd.d/" directory
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
3. Modify the configuration file as follows:
[root@nod1 ~]# vim /etc/drbd.d/global_common.conf
global {
usage-count no;          # whether to participate in DRBD usage statistics; the default is yes
# minor-count dialog-refresh disable-ip-verification
}
common {
protocol C;              # DRBD synchronization protocol to use
handlers {
# These are EXAMPLE handlers only.
# They may have severe implications,
# Like hard resetting the node under certain circumstances.
# Be careful when chosing your poison.
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
# fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
}
startup {
# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
}
options {
# cpu-mask on-no-data-accessible
}
disk {
on-io-error detach;      # I/O error handling policy: detach the backing device
# size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
# disk-drain md-flushes resync-rate resync-after al-extents
# c-plan-ahead c-delay-target c-fill-target c-max-rate
# c-min-rate disk-timeout
}
net {
cram-hmac-alg "sha1";        # message digest algorithm used for peer authentication
shared-secret "allendrbd";   # shared secret (key) used for peer authentication
# protocol timeout max-epoch-size max-buffers unplug-watermark
# connect-int ping-int sndbuf-size rcvbuf-size ko-count
# allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
# after-sb-1pri after-sb-2pri always-asbp rr-conflict
# ping-timeout data-integrity-alg tcp-cork on-congestion
# congestion-fill congestion-extents csums-alg verify-alg
# use-rle
}
syncer {
rate 1024M;              # synchronization rate to the peer node
}
}
4. Add the resource file:
[root@nod1 ~]# vim /etc/drbd.d/drbd.res
resource drbd {
on nod1.allen.com {                  # "on" starts a host section and is followed by the host name
device /dev/drbd0;                   # DRBD device name
disk /dev/sda3;                      # partition used by drbd0, here "sda3"
address 172.16.14.1:7789;            # DRBD listen address and port
meta-disk internal;
}
on nod2.allen.com {
device /dev/drbd0;
disk /dev/sda3;
address 172.16.14.2:7789;
meta-disk internal;
}
}
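Before creating the metadata, it can be useful to let drbdadm parse the configuration and report any syntax problems; this optional check simply dumps the parsed resource:
[root@nod1 ~]# drbdadm dump drbd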
5. Copy the configuration files to NOD2
[root@nod1 ~]# scp /etc/drbd.d/{global_common.conf,drbd.res} nod2:/etc/drbd.d/
The authenticity of host 'nod2 (172.16.14.2)' can't be established.
RSA key fingerprint is 29:d3:28:85:20:a1:1f:2a:11:e5:88:cd:25:d0:95:c7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'nod2' (RSA) to the list of known hosts.
root@nod2's password:
global_common.conf                     100% 1943     1.9KB/s   00:00
drbd.res                               100%  318     0.3KB/s   00:00
6. Initialize the resource and start the service
###### Initialize the resource and start the service on both NOD1 and NOD2
[root@nod1 ~]# drbdadm create-md drbd
Writing meta data ...
initializing activity log
NOT initializing bitmap
lk_bdev_save (/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.        # the metadata was created successfully
lk_bdev_save (/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
###### Start Service
[root@nod1 ~]# service drbd start
Starting DRBD resources: [
create res: drbd
prepare disk: drbd
adjust disk: drbd
adjust net: drbd
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'drbd'; 0 sec -> wait forever)
To abort waiting enter 'yes' [12]: yes
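At this point neither node has been promoted yet, so both sides normally report Secondary/Secondary with Inconsistent/Inconsistent data; this can be confirmed on either node before moving on:
[root@nod1 ~]# drbd-overview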
7. Set the primary node and perform the initial device sync
[root@nod1 ~]# drbdadm -- --overwrite-data-of-peer primary drbd
[root@nod1 ~]# cat /proc/drbd          # watch the sync progress
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
ns:1897624 nr:0 dw:0 dr:1901216 al:0 bm:115 lo:0 pe:3 ua:3 ap:0 ep:1 wo:f oos:207988
[=================>..] sync'ed: 90.3% (207988/2103412)K
finish: 0:00:07 speed: 26,792 (27,076) K/sec
###### When synchronization is complete, the state looks like the following
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:2103412 nr:0 dw:0 dr:2104084 al:0 bm:129 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: "drbd" here is the resource name
###### The synchronization progress can also be viewed with the following command
drbd-overview
8. Create a file system
###### Format the DRBD device
[root@nod1 ~]# mkfs.ext4 /dev/drbd0
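An optional sanity check that the file system really was created on the DRBD device:
[root@nod1 ~]# blkid /dev/drbd0        # should report TYPE="ext4"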
9. Disable the DRBD service from starting at boot on both NOD1 and NOD2
[root@nod1 ~]# chkconfig drbd off
[root@nod1 ~]# chkconfig --list drbd
drbd            0:off  1:off  2:off  3:off  4:off  5:off  6:off
=====================================================================
[root@nod2 ~]# chkconfig drbd off
[root@nod2 ~]# chkconfig --list drbd
drbd            0:off  1:off  2:off  3:off  4:off  5:off  6:off
III. Install MySQL
1. Install and configure MySQL
###### Install MySQL on node NOD1
[root@nod1 ~]# mkdir /mydata
[root@nod1 ~]# mount /dev/drbd0 /mydata/
[root@nod1 ~]# mkdir /mydata/data
[root@nod1 ~]# tar xf mysql-5.5.33-linux2.6-x86_64.tar.gz -C /usr/local/
[root@nod1 ~]# cd /usr/local/
[root@nod1 local]# ln -s mysql-5.5.33-linux2.6-x86_64 mysql
[root@nod1 local]# cd mysql
[root@nod1 mysql]# cp support-files/my-large.cnf /etc/my.cnf
[root@nod1 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@nod1 mysql]# chmod +x /etc/init.d/mysqld
[root@nod1 mysql]# chkconfig --add mysqld
[root@nod1 mysql]# chkconfig mysqld off
[root@nod1 mysql]# vim /etc/my.cnf
datadir = /mydata/data
innodb_file_per_table = 1
[root@nod1 mysql]# echo "PATH=/usr/local/mysql/bin:$PATH" >> /etc/profile
[root@nod1 mysql]# . /etc/profile
[root@nod1 mysql]# useradd -r -u 306 mysql
[root@nod1 mysql]# chown mysql.mysql -R /mydata
[root@nod1 mysql]# chown root.mysql *
[root@nod1 mysql]# ./scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
[root@nod1 mysql]# service mysqld start
Starting MySQL ..... [OK]
[root@nod1 mysql]# chkconfig --list mysqld
mysqld          0:off  1:off  2:off  3:off  4:off  5:off  6:off
[root@nod1 mysql]# service mysqld stop
Shutting down MySQL. [OK]
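Before handing the device over to NOD2, it can be worth confirming that the MySQL data files actually live on the DRBD-backed mount, so they will travel with the device:
[root@nod1 mysql]# ls /mydata/data     # should show the mysql and test databases, ibdata1, ib_logfile*, etc.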
###### Install MySQL on node NOD2
[root@nod2 ~]# scp nod1:/root/mysql-5.5.33-linux2.6-x86_64.tar.gz ./
[root@nod2 ~]# mkdir /mydata
[root@nod2 ~]# tar xf mysql-5.5.33-linux2.6-x86_64.tar.gz -C /usr/local/
[root@nod2 ~]# cd /usr/local/
[root@nod2 local]# ln -s mysql-5.5.33-linux2.6-x86_64 mysql
[root@nod2 local]# cd mysql
[root@nod2 mysql]# cp support-files/my-large.cnf /etc/my.cnf
###### Modify the configuration file and add the following settings
[root@nod2 mysql]# vim /etc/my.cnf
datadir = /mydata/data
innodb_file_per_table = 1
[root@nod2 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@nod2 mysql]# chkconfig --add mysqld
[root@nod2 mysql]# chkconfig mysqld off
[root@nod2 mysql]# useradd -r -u 306 mysql
[root@nod2 mysql]# chown -R root.mysql *
2. Unmount the DRBD device on NOD1 and demote the node to secondary
[root@nod1 ~]# drbd-overview
0:drbd/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@nod1 ~]# umount /mydata/
[root@nod1 ~]# drbdadm secondary drbd
[root@nod1 ~]# drbd-overview
0:drbd/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
3. Promote NOD2 to DRBD primary and mount the DRBD device
[root@nod2 ~]# drbd-overview
0:drbd/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
[root@nod2 ~]# drbdadm primary drbd
[root@nod2 ~]# drbd-overview
0:drbd/0  Connected Primary/Secondary UpToDate/UpToDate C r-----
[root@nod2 ~]# mount /dev/drbd0 /mydata/
4. Start the MySQL service on node NOD2 to test it
[root@nod2 ~]# chown -R mysql.mysql /mydata
[root@nod2 ~]# service mysqld start
Starting MySQL .. [OK]
[root@nod2 ~]# service mysqld stop
Shutting down MySQL. [OK]
[root@nod2 ~]# chkconfig --list mysqld
mysqld          0:off  1:off  2:off  3:off  4:off  5:off  6:off
5. Demote the DRBD resource on NOD2 back to secondary:
[root@nod2 ~]# drbdadm secondary drbd
[root@nod2 ~]# drbd-overview
0:drbd/0  Connected Secondary/Secondary UpToDate/UpToDate C r-----
6. Unmount the DRBD device and stop the DRBD service on both NOD1 and NOD2
[root@nod2 ~]# umount /mydata/
[root@nod2 ~]# service drbd stop
Stopping all DRBD resources:.
[root@nod1 ~]# service drbd stop
Stopping all DRBD resources:.
--------------------------------------------------------------------------------
IV. Install Corosync + Pacemaker
1. Install the packages on both NOD1 and NOD2
[root@nod1 ~]# yum -y install crmsh*.rpm pssh*.rpm pacemaker corosync
[root@nod2 ~]# scp nod1:/root/{pssh*.rpm,crmsh*.rpm} ./
[root@nod2 ~]# yum -y install crmsh*.rpm pssh*.rpm pacemaker corosync
2. Configure Corosync on NOD1
[root@nod1 ~]# cd /etc/corosync/
[root@nod1 corosync]# ls
corosync.conf.example corosync.conf.example.udpu service.d uidgid.d
[root@nod1 corosync]# cp corosync.conf.example corosync.conf
[root@nod1 corosync]# vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
version: 2                      # configuration version
secauth: on                     # enable authentication
threads: 0                      # number of threads used for authentication; 0 means no limit
interface {
ringnumber: 0
bindnetaddr: 172.16.0.0         # network used for cluster communication
mcastaddr: 226.94.14.12         # multicast address
mcastport: 5405                 # multicast port
ttl: 1
}
}
logging {
fileline: off
to_stderr: no                   # send log output to standard error
to_logfile: yes                 # log to a file
to_syslog: no                   # log to syslog; it is recommended to enable only one of the two
logfile: /var/log/cluster/corosync.log    # log file path; the directory must be created manually
debug: off
timestamp: on                   # record a timestamp with each log entry
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
service {                       # add support for Pacemaker
ver: 0
name: pacemaker
}
aisexec {                       # user/group for openais (aisexec); sometimes needed
user: root
group: root
}
3. Generate the authentication key file used for communication between the nodes
[root@nod1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 152).
Press keys on your keyboard to generate entropy (bits = 216).
Note: if key generation stalls as above because there is not enough random data (entropy), you can install software to provide it
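If the machine (for example a virtual machine) cannot produce enough entropy and corosync-keygen hangs, one common option, given here as an assumption and not part of the original steps, is to install the haveged entropy daemon from epel and rerun the key generation:
yum -y install haveged
service haveged start
chkconfig haveged on
corosync-keygen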
4. Copy the configuration file and authentication key to node NOD2
[root@nod1 corosync]# scp authkey corosync.conf nod2:/etc/corosync/
authkey                                100%  128     0.1KB/s   00:00
corosync.conf                          100%  522     0.5KB/s   00:00
5. Start the Corosync service
[root@nod1 ~]# service corosync start
Starting Corosync Cluster Engine (corosync): [OK]
###### Check whether the corosync engine started normally
[root@nod1 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [MAIN] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Sep 19 18:44:36 corosync [MAIN] Successfully read main configuration file '/etc/corosync/corosync.conf'.
###### Check whether any errors were reported during startup; the following messages can be ignored
[root@nod1 ~]# grep ERROR: /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [pcmk] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Sep 19 18:44:36 corosync [pcmk] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
###### Check whether the member node initialization notifications were issued normally
[root@nod1 ~]# grep TOTEM /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [TOTEM] Initializing transport (UDP/IP Multicast).
Sep 19 18:44:36 corosync [TOTEM] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Sep 19 18:44:36 corosync [TOTEM] The network interface [172.16.14.1] is now up.
Sep 19 18:44:36 corosync [TOTEM] A processor joined or left the membership and a new membership was formed.
###### Check whether pacemaker started correctly
[root@nod1 ~]# grep pcmk_startup /var/log/cluster/corosync.log
Sep 19 18:44:36 corosync [pcmk] info: pcmk_startup: CRM: Initialized
Sep 19 18:44:36 corosync [pcmk] Logging: Initialized pcmk_startup
Sep 19 18:44:36 corosync [pcmk] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Sep 19 18:44:36 corosync [pcmk] info: pcmk_startup: Service: 9
Sep 19 18:44:36 corosync [pcmk] info: pcmk_startup: Local hostname: nod1.allen.com
6. Start the Corosync service on node NOD2
[root@nod1 ~]# ssh nod2 'service corosync start'
Starting Corosync Cluster Engine (corosync): [OK]
###### View the cluster node status
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 19:01:33 2013
Last change: Thu Sep 19 18:49:09 2013 via crmd on nod1.allen.com
Stack: classic openais (with plugin)
Current DC: nod1.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
0 Resources configured.
Online: [ nod1.allen.com nod2.allen.com ]        # both nodes are up and running
7. View the processes started by Corosync
[root@nod1 ~]# ps auxf
root     10336  0.3  1.2 556824  4940 ?   Ssl  18:44  0:04 corosync
305      10342  0.0  1.7  87440  7076 ?   S    18:44  0:01  \_ /usr/libexec/pacemaker/cib
root     10343  0.0  0.8  81460  3220 ?   S    18:44  0:00  \_ /usr/libexec/pacemaker/stonit
root     10344  0.0  0.7  73088  2940 ?   S    18:44  0:00  \_ /usr/libexec/pacemaker/lrmd
305      10345  0.0  0.7  85736  3060 ?   S    18:44  0:00  \_ /usr/libexec/pacemaker/attrd
305      10346  0.0  4.7 116932 18812 ?   S    18:44  0:00  \_ /usr/libexec/pacemaker/pengin
305      10347  0.0  1.0 143736  4316 ?   S    18:44  0:00  \_ /usr/libexec/pacemaker/crmd
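To double-check that the totem ring itself is healthy, the following optional command can be run on each node:
[root@nod1 ~]# corosync-cfgtool -s     # the ring status should show no faults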
V. Configure the cluster resources
1. Corosync enables STONITH by default, but the current cluster has no STONITH device, so the errors below appear; STONITH therefore needs to be disabled
[root@nod1 ~]# crm_verify -L -V
error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
-V may provide more details
###### Disable STONITH and check the configuration
[root@nod1 ~]# crm configure property stonith-enabled=false
[root@nod1 ~]# crm configure show
node nod1.allen.com
node nod2.allen.com
property $id="cib-bootstrap-options"
    dc-version="1.1.8-7.el6-394e906"
    cluster-infrastructure="classic openais (with plugin)"
    expected-quorum-votes="2"
    stonith-enabled="false"
2. View the resource agent classes supported by the current cluster
[root@nod1 ~]# crm ra classes
lsb
ocf / heartbeat linbit pacemaker RedHat
service
stonith
Note: the linbit provider only appears after the DRBD service has been installed
3. How do you view the list of resource agents available under a given class?
crm ra list lsb
crm ra list ocf heartbeat
crm ra list ocf pacemaker
crm ra list stonith
crm ra list ocf linbit
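To see which parameters a particular agent accepts before defining a resource with it, crm ra info can be used, for example:
crm ra info ocf:heartbeat:IPaddr
crm ra info ocf:linbit:drbd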
4. Configure the VIP resource and the Mysqld resource
[root@nod1 ~]# crm                  # enter crm interactive mode
crm(live)# configure
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# primitive MyVip ocf:heartbeat:IPaddr params ip="172.16.14.10"    # define the virtual IP resource
crm(live)configure# primitive Mysqld lsb:mysqld                                      # define the MySQL service resource
crm(live)configure# verify          # check for syntax errors
crm(live)configure# commit          # commit the configuration
crm(live)configure# show            # view the configuration
node nod1.allen.com
node nod2.allen.com
primitive MyVip ocf:heartbeat:IPaddr
    params ip="172.16.14.10"
primitive Mysqld lsb:mysqld
property $id="cib-bootstrap-options"
    dc-version="1.1.8-7.el6-394e906"
    cluster-infrastructure="classic openais (with plugin)"
    expected-quorum-votes="2"
    stonith-enabled="false"
    no-quorum-policy="ignore"
5. Configure the DRBD master/slave resource
crm(live)configure# primitive Drbd ocf:linbit:drbd params drbd_resource="drbd" op monitor interval=10s role="Master" op monitor interval=20s role="Slave" op start timeout=240s op stop timeout=100s
crm(live)configure# master My_Drbd Drbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show Drbd
primitive Drbd ocf:linbit:drbd
    params drbd_resource="drbd"
    op monitor interval="10s" role="Master"
    op monitor interval="20s" role="Slave"
    op start timeout="240s" interval="0"
    op stop timeout="100s" interval="0"
crm(live)configure# show My_Drbd
ms My_Drbd Drbd
    meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
6. Define the filesystem resource
crm(live)configure# primitive FileSys ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype="ext4" op start timeout="60s" op stop timeout="60s"
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show FileSys
primitive FileSys ocf:heartbeat:Filesystem
    params device="/dev/drbd0" directory="/mydata" fstype="ext4"
    op start timeout="60s" interval="0"
    op stop timeout="60s" interval="0"
7. Define colocation and ordering constraints between the resources
crm(live)configure# colocation FileSys_on_My_Drbd inf: FileSys My_Drbd:Master        # the file system runs together with the DRBD master
crm(live)configure# order FileSys_after_My_Drbd inf: My_Drbd:promote FileSys:start   # DRBD is promoted before the file system starts
crm(live)configure# verify
crm(live)configure# colocation Mysqld_on_FileSys inf: Mysqld FileSys                 # the MySQL service runs together with the file system
crm(live)configure# order Mysqld_after_FileSys inf: FileSys Mysqld:start             # the file system starts before the MySQL service
crm(live)configure# verify
crm(live)configure# colocation MyVip_on_Mysqld inf: MyVip Mysqld                     # the virtual IP runs together with the MySQL service
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# bye             # leave the crm shell
8. Check the service status:
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 21:18:20 2013
Last change: Thu Sep 19 21:18:06 2013 via crmd on nod1.allen.com
Stack: classic openais (with plugin)
Current DC: nod2.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [nod1.allen.com nod2.allen.com]
Master/Slave Set: My_Drbd [Drbd]
Masters: [ nod2.allen.com ]
Slaves: [ nod1.allen.com ]
FileSys (ocf::heartbeat:Filesystem): Started nod2.allen.com
Failed actions:
Mysqld_start_0 (node=nod1.allen.com, call=60, rc=1, status=Timed Out): unknown error
MyVip_start_0 (node=nod2.allen.com, call=47, rc=1, status=complete): unknown error
Mysqld_start_0 (node=nod2.allen.com, call=13, rc=1, status=complete): unknown error
FileSys_start_0 (node=nod2.allen.com, call=39, rc=1, status=complete): unknown error
Note: the errors above appear because the resources were committed while they were still being defined: the cluster probes whether each service is running, and may try to start it before the full set of resources and constraints is in place, so errors are reported. They can be cleared with the following commands
[root@nod1 ~]# crm resource cleanup Mysqld
[root@nod1 ~]# crm resource cleanup MyVip
[root@nod1 ~]# crm resource cleanup FileSys
9. After clearing the errors in the previous step, check the status again:
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 21:26:49 2013
Last change: Thu Sep 19 21:19:35 2013 via crmd on nod2.allen.com
Stack: classic openais (with plugin)
Current DC: nod2.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [nod1.allen.com nod2.allen.com]
Master/Slave Set: My_Drbd [Drbd]
Masters: [ nod1.allen.com ]
Slaves: [ nod2.allen.com ]
MyVip  (ocf::heartbeat:IPaddr): Started nod1.allen.com
Mysqld (lsb:mysqld): Started nod1.allen.com
FileSys (ocf::heartbeat:Filesystem): Started nod1.allen.com
====================================
Note: as seen above, the DRBD master (My_Drbd), MyVip, Mysqld, and FileSys resources are all running on node NOD1
VI. Verify that the services are running correctly
1. Check on node NOD1 that the Mysqld service is running and that the virtual IP address and file system are configured
[root@nod1 ~]# netstat -anpt | grep mysql
tcp        0      0 0.0.0.0:3306        0.0.0.0:*        LISTEN      22564/mysqld
[root@nod1 ~]# mount | grep drbd0
/dev/drbd0 on /mydata type ext4 (rw)
[root@nod1 ~]# ifconfig eth0:0
eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:3D:3F:44
          inet addr:172.16.14.10  Bcast:172.16.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
2. Log in to the database and create a database for verification
[root@nod1 ~]# mysql
mysql> create database allen;
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| allen              |
| mysql              |
| performance_schema |
| test               |
+--------------------+
3. Simulate a failure of the master node by putting it into "standby" and check whether the services move to the standby node; the current master node is nod1.allen.com and the standby node is nod2.allen.com
[root@nod1 ~]# crm node standby nod1.allen.com
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 22:23:50 2013
Last change: Thu Sep 19 22:23:42 2013 via crm_attribute on nod2.allen.com
Stack: classic openais (with plugin)
Current DC: nod1.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Node nod1.allen.com: standby
Online: [nod2.allen.com]
Master/Slave Set: My_Drbd [Drbd]
Masters: [ nod2.allen.com ]
Stopped: [ Drbd:1 ]
MyVip  (ocf::heartbeat:IPaddr): Started nod2.allen.com
Mysqld (lsb:mysqld): Started nod2.allen.com
FileSys (ocf::heartbeat:Filesystem): Started nod2.allen.com
------------------------------------------------------------------------
###### Clearly, all services have been switched over to node NOD2
4. Log in to MySQL on node NOD2 and verify that the "allen" database exists
[root@nod2 ~]# mysql
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| allen              |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5. When NOD1 has been repaired and comes back online, the services running on NOD2 do not switch back to NOD1. If you care about which node the resources end up on, you can configure resource stickiness; in general it is recommended not to fail the services back, to avoid the unnecessary cost of another switchover
[root@nod1 ~]# crm node online nod1.allen.com
[root@nod1 ~]# crm status
Last updated: Thu Sep 19 22:34:55 2013
Last change: Thu Sep 19 22:34:51 2013 via crm_attribute on nod1.allen.com
Stack: classic openais (with plugin)
Current DC: nod1.allen.com - partition with quorum
Version: 1.1.8-7.el6-394e906
2 Nodes configured, 2 expected votes
5 Resources configured.
Online: [nod1.allen.com nod2.allen.com]
Master/Slave Set: My_Drbd [Drbd]
Masters: [ nod2.allen.com ]
Slaves: [ nod1.allen.com ]
MyVip  (ocf::heartbeat:IPaddr): Started nod2.allen.com
Mysqld (lsb:mysqld): Started nod2.allen.com
FileSys (ocf::heartbeat:Filesystem): Started nod2.allen.com
6. Command for setting resource stickiness; it is not tested here, but interested readers can try it
crm configure rsc_defaults resource-stickiness=100
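After this default is committed, it should show up in the cluster configuration; a quick optional way to confirm:
crm configure show | grep stickiness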
As seen above, all services are working properly. The MySQL high-availability setup is now complete, and the services and MySQL data have been verified to work correctly.