One, DRBD Overview
DRBD's full name is Distributed Replicated Block Device. It is a distributed replicated block device consisting of a kernel module and associated scripts, used to build high-availability clusters. It works by mirroring an entire block device over the network, so you can think of it as a network RAID: it lets users maintain a real-time mirror of a local block device on a remote machine.
Two, How Does DRBD Work?
The primary node (DRBD Primary) is responsible for receiving data; it writes the data to its local disk and at the same time sends it to the other host (DRBD Secondary). The other host then saves the data to its own disk. Currently DRBD allows read/write access on only one node at a time, which is sufficient for a failover high-availability cluster; later versions may support read/write access on both nodes.
Three, The Relationship Between DRBD and HA
A DRBD system consists of two nodes and, like an HA cluster, has a primary node and a standby node. On the primary node, applications and the operating system can run and access the DRBD device (/dev/drbd*). Data written to the DRBD device on the primary node is stored on that node's disk and, at the same time, automatically sent to the corresponding DRBD device on the standby node, which finally writes it to its own disk; the standby node simply writes the data arriving from the primary's DRBD device to its local disk. Most high-availability clusters today use shared storage, and DRBD can serve as such a shared storage device without much hardware investment. Because it runs over a TCP/IP network, using DRBD as shared storage saves a great deal of cost, since it is much cheaper than dedicated network storage, while its performance and stability are also good.
Four, DRBD Replication Modes
Protocol A:
Asynchronous replication. A write is considered complete as soon as the local disk write has finished and the replication packet has been placed in the send queue. If a node fails, data loss may occur, because data written locally may still be sitting in the transmit queue and never reach the remote node. The data on the failover node is consistent, but not up to date. This mode is typically used for geographically separated nodes.
Protocol B:
Memory-synchronous (semi-synchronous) replication. A write on the primary node is considered complete once the local disk write has finished and the replication packet has reached the peer node. Data loss can occur if both participating nodes fail simultaneously, because data that reached the peer may not yet have been committed to its disk.
Protocol C:
Synchronous replication. A write is considered complete only after both the local and the remote node have confirmed the disk write. No data is lost, so this is the popular choice for cluster nodes, but I/O throughput depends on network bandwidth.
Protocol C is generally used, but choosing protocol C affects traffic and therefore network latency. For data reliability, we must consider carefully which protocol to use in a production environment.
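In DRBD's configuration files (shown in full later in this article), the replication protocol is selected with a single directive; a minimal sketch of a resource section, where the resource name r0 is hypothetical and not part of this setup:

```
resource r0 {
  protocol C;    # synchronous: a write completes only after both disks confirm it
  # protocol A;  # asynchronous: completes once the local write finishes and the packet is queued
  # protocol B;  # semi-synchronous: completes once the packet reaches the peer's memory
}
```

In this article the protocol is instead set once in the common section of global_common.conf, which applies it to every resource.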
Five, DRBD Operating Principle
DRBD is a distributed storage system in the Linux kernel's storage layer. With DRBD, two Linux servers can share a block device, a file system, and data for free, similar in function to a network RAID-1.
Six, Environment Introduction and Installation Preparation
Environment Introduction:
System version: CentOS 6.4_x86_64
DRBD software: drbd-8.4.3-33.el6.x86_64 drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64 Download: http://rpmfind.net/
Note: the versions of these two packages must match each other, and the drbd-kmdl version must correspond to the system's current kernel version. In practice, download the versions that fit your platform's needs; display the kernel version with "uname -r".
Preparation Before Installation:
1, the host name of each node must match the output of the "uname -n" command
###### Execute on the NOD1 node
sed -i 's@\(HOSTNAME=\).*@\1nod1.allen.com@g' /etc/sysconfig/network
hostname nod1.allen.com
###### Execute on the NOD2 node
sed -i 's@\(HOSTNAME=\).*@\1nod2.allen.com@g' /etc/sysconfig/network
hostname nod2.allen.com
Note: modifying the file alone takes effect only after a reboot; by modifying the file and then also running the hostname command, the host name is changed without rebooting.
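The sed substitution above can be tried safely on a scratch copy of the file first; a small sketch (the temporary file and its sample contents are only for illustration):

```shell
# Work on a scratch copy instead of the real /etc/sysconfig/network
tmp=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=localhost.localdomain\n' > "$tmp"

# Replace whatever follows HOSTNAME= with the desired FQDN
sed -i 's@\(HOSTNAME=\).*@\1nod1.allen.com@g' "$tmp"

grep '^HOSTNAME=' "$tmp"   # HOSTNAME=nod1.allen.com
rm -f "$tmp"
```

The \(...\)/\1 pair captures the literal "HOSTNAME=" prefix and re-emits it, so only the value after the equals sign is rewritten.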
2, the host names and corresponding IP addresses of the two nodes must resolve correctly
###### Execute on both NOD1 and NOD2 nodes
cat > /etc/hosts << EOF
192.168.137.225 nod1.allen.com nod1
192.168.137.222 nod2.allen.com nod2
EOF
3, configure the epel yum source; download and install the epel-release package
###### Install on both NOD1 and NOD2 nodes
rpm -ivh epel-release-6-8.noarch.rpm
4, the two nodes must each provide a partition of the same size
###### Create a partition on the NOD1 node; the partition size must be the same as on NOD2
[root@nod1 ~]# fdisk /dev/sda
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (7859-15665, default 7859):
Using default value 7859
Last cylinder, +cylinders or +size{K,M,G} (7859-15665, default 15665): +2G
Command (m for help): w
[root@nod1 ~]# partx /dev/sda   # have the kernel re-read the partition table
###### Check whether the kernel has recognized the new partition; if it has, no reboot is needed, otherwise the system must be rebooted
[root@nod1 ~]# cat /proc/partitions
major minor #blocks name
8 0 125829120 sda
8 1 204800 sda1
8 2 62914560 sda2
253 0 20971520 dm-0
253 1 2097152 dm-1
253 2 10485760 dm-2
253 3 20971520 dm-3
[root@nod1 ~]# reboot
###### Create a partition on the NOD2 node; the partition size must be the same as on NOD1
[root@nod2 ~]# fdisk /dev/sda
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (7859-15665, default 7859):
Using default value 7859
Last cylinder, +cylinders or +size{K,M,G} (7859-15665, default 15665): +2G
Command (m for help): w
[root@nod2 ~]# partx /dev/sda   # have the kernel re-read the partition table
###### Check whether the kernel has recognized the new partition; if it has, no reboot is needed, otherwise the system must be rebooted
[root@nod2 ~]# cat /proc/partitions
major minor #blocks name
8 0 125829120 sda
8 1 204800 sda1
8 2 62914560 sda2
253 0 20971520 dm-0
253 1 2097152 dm-1
253 2 10485760 dm-2
253 3 20971520 dm-3
[root@nod2 ~]# reboot
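Whether the two nodes really ended up with equally sized partitions can be verified by comparing the #blocks column of /proc/partitions on each machine; a sketch using sample values standing in for each node's real output (the 2104515 figure is illustrative, not taken from this setup):

```shell
# Sample /proc/partitions lines for sda3, one per node (stand-ins for the real files)
nod1_parts='8 3 2104515 sda3'
nod2_parts='8 3 2104515 sda3'

# Extract the #blocks field for sda3 on each node
size1=$(echo "$nod1_parts" | awk '$4 == "sda3" {print $3}')
size2=$(echo "$nod2_parts" | awk '$4 == "sda3" {print $3}')

if [ "$size1" = "$size2" ]; then
    echo "sda3 sizes match: ${size1} blocks"
else
    echo "size mismatch: $size1 vs $size2" >&2
fi
```

On the real hosts the two variables would be filled from `grep sda3 /proc/partitions` run on each node.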
--------------------------------------------------------------------------------
Seven, Install and Configure DRBD
1, install the DRBD packages on NOD1 and NOD2
###### NOD1
[root@nod1 ~]# ls drbd-*
drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
[root@nod1 ~]# yum -y install drbd-*.rpm
###### NOD2
[root@nod2 ~]# ls drbd-*
drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
[root@nod2 ~]# yum -y install drbd-*.rpm
2, view the DRBD configuration files
ll /etc/drbd.conf; ll /etc/drbd.d/
-rw-r--r-- 1 root root 133 May 14 21:12 /etc/drbd.conf # master configuration file
total 4
-rw-r--r-- 1 root root 1836 May 14 21:12 global_common.conf # global configuration file
###### View the master configuration file's contents
cat /etc/drbd.conf
###### The master configuration file includes the global configuration file and all files in the "drbd.d/" directory ending in .res
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
3, modify the configuration file as follows:
[root@nod1 ~]# vim /etc/drbd.d/global_common.conf
global {
usage-count no; # whether to participate in DRBD usage statistics; the default is yes
# minor-count dialog-refresh disable-ip-verification
}
common {
protocol C; # use DRBD's protocol C (synchronous replication)
handlers {
# These are EXAMPLE handlers only.
# They may have severe implications,
# like hard resetting the node under certain circumstances.
# Be careful when choosing your poison.
pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger; reboot -f";
pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger; reboot -f";
local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger; halt -f";
# fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
# split-brain "/usr/lib/drbd/notify-split-brain.sh root";
# out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
# before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
# after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
}
startup {
# wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
}
options {
# cpu-mask on-no-data-accessible
}
disk {
on-io-error detach; # I/O error handling policy: detach the device
# size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
# disk-drain md-flushes resync-rate resync-after al-extents
# c-plan-ahead c-delay-target c-fill-target c-max-rate
# c-min-rate disk-timeout
}
net {
cram-hmac-alg "sha1"; # set the algorithm used for peer authentication
shared-secret "allendrbd"; # set the shared secret used for authentication
# protocol timeout max-epoch-size max-buffers unplug-watermark
# connect-int ping-int sndbuf-size rcvbuf-size ko-count
# allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
# after-sb-1pri after-sb-2pri always-asbp rr-conflict
# ping-timeout data-integrity-alg tcp-cork on-congestion
# congestion-fill congestion-extents csums-alg verify-alg
# use-rle
syncer {
rate 1024M; # set the synchronization rate to the standby node
}
}
Note: the on-io-error <strategy> policy may be one of the following options:
detach: this is the default and recommended option. If a lower-level I/O error occurs on a node's disk, the node drops its backing device and continues running in diskless mode
pass_on: DRBD reports the I/O error to the upper layer. On the primary node it is reported to the mounted file system, but on the secondary node it is often ignored (because the secondary has no upper layer to report to)
call-local-io-error: invokes the command defined in the local-io-error handler. This requires a corresponding local-io-error handler to be defined for the resource, and gives the administrator enough freedom to handle the I/O error with any command or script
4. Add the resource file:
[root@nod1 ~]# vim /etc/drbd.d/drbd.res
resource drbd {
on nod1.allen.com { # each "on" section begins with the host name of the node it describes
device /dev/drbd0; # DRBD device name
disk /dev/sda3; # the partition used by drbd0 is "sda3"
address 192.168.137.225:7789; # set the DRBD listen address and port
meta-disk internal;
}
on nod2.allen.com {
device /dev/drbd0;
disk /dev/sda3;
address 192.168.137.222:7789;
meta-disk internal;
}
}
5, copy the configuration files to NOD2
[root@nod1 ~]# scp /etc/drbd.d/{global_common.conf,drbd.res} nod2:/etc/drbd.d/
The authenticity of host 'nod2 (192.168.137.222)' can't be established.
RSA key fingerprint is 29:d3:28:85:20:a1:1f:2a:11:e5:88:cd:25:d0:95:c7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'nod2' (RSA) to the list of known hosts.
root@nod2's password:
global_common.conf 100% 1943 1.9KB/s 00:00
drbd.res 100% 318 0.3KB/s 00:00
6, initialize the resource and start the service
###### Initialize the resource and start the service on the NOD1 node
[root@nod1 ~]# drbdadm create-md drbd
Writing meta data ...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created. # indicates the metadata was successfully created
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
###### Start the service
[root@nod1 ~]# service drbd start
Starting DRBD resources: [
create res: drbd
prepare disk: drbd
adjust disk: drbd
adjust net: drbd
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'drbd'; 0 sec -> wait forever)
To abort waiting enter 'yes' [12]: yes
###### Check the listening port
[root@nod1 ~]# ss -tanl | grep 7789
LISTEN 0 5 192.168.137.225:7789 *:*
###### Initialize the resource and start the service on the NOD2 node
[root@nod2 ~]# drbdadm create-md drbd
Writing meta data ...
initializing activity log
NOT initializing bitmap
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
New drbd meta data block successfully created.
lk_bdev_save(/var/lib/drbd/drbd-minor-0.lkbd) failed: No such file or directory
###### Start the service
[root@nod2 ~]# service drbd start
Starting DRBD resources: [
create res: drbd
prepare disk: drbd
adjust disk: drbd
adjust net: drbd
]
###### Check the listening address and port
[root@nod2 ~]# netstat -anput | grep 7789
tcp 0 0 192.168.137.222:42345 192.168.137.225:7789 ESTABLISHED -
tcp 0 0 192.168.137.222:7789 192.168.137.225:42325 ESTABLISHED -
###### View the DRBD state after activation
[root@nod2 ~]# drbd-overview
0:drbd/0 Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
7, a detailed description of resource connection states
7.1, how to check the connection state of a resource?
[root@nod1 ~]# drbdadm cstate drbd # drbd is the resource name
Connected
7.2, the connection states of a resource; a resource may be in one of the following connection states:
StandAlone: the network configuration is unavailable; the resource has not yet been connected, has been administratively disconnected (with the drbdadm disconnect command), or dropped its connection because authentication failed or split brain occurred
Disconnecting: a temporary state while disconnecting; the next state is StandAlone
Unconnected: a temporary state before a connection attempt; possible next states are WFConnection and WFReportParams
Timeout: the connection with the peer node timed out; a temporary state whose next state is Unconnected
BrokenPipe: the connection with the peer node was lost; also a temporary state, whose next state is Unconnected
NetworkFailure: a temporary state after the connection with the peer node was dropped; the next state is Unconnected
ProtocolError: a temporary state after the connection with the peer node was dropped; the next state is Unconnected
TearDown: a temporary state; the peer node is closing the connection, and the next state is Unconnected
WFConnection: waiting for a network connection with the peer node to be established
WFReportParams: a TCP connection has been established; this node is waiting for the first network packet from the peer
Connected: the DRBD connection has been established and data mirroring is active; this is the normal state
StartingSyncS: a full synchronization initiated by the administrator has just started; possible next states are SyncSource or PausedSyncS
StartingSyncT: a full synchronization initiated by the administrator has just started; the next state is WFSyncUUID
WFBitMapS: a partial synchronization has just started; possible next states are SyncSource or PausedSyncS
WFBitMapT: a partial synchronization has just started; the next possible state is WFSyncUUID
WFSyncUUID: synchronization is about to begin; possible next states are SyncTarget or PausedSyncT
SyncSource: synchronization is in progress with this node as the source
SyncTarget: synchronization is in progress with this node as the target
PausedSyncS: the local node is the source of an ongoing synchronization, but synchronization is currently paused, possibly because another synchronization is in progress or because synchronization was suspended with the drbdadm pause-sync command
PausedSyncT: the local node is the target of an ongoing synchronization, but synchronization is currently paused, possibly because another synchronization is in progress or because synchronization was suspended with the drbdadm pause-sync command
VerifyS: online device verification is being executed with the local node as the verification source
VerifyT: online device verification is being executed with the local node as the verification target
7.3, resource roles
Command to view the resource roles:
[root@nod1 ~]# drbdadm role drbd
Secondary/Secondary
[root@nod1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:2103412
Comment:
Primary: the resource is currently primary and may be being read or written. Unless dual-primary mode is used, this role appears on only one of the two nodes
Secondary: the resource is currently secondary and normally receives updates from the peer node
Unknown: the resource's role is currently unknown; the local resource is never in this state, it is only reported for the peer
7.4, disk states
Command to check the disk states:
[root@nod1 ~]# drbdadm dstate drbd
Inconsistent/Inconsistent
The local and peer disks may be in one of the following states:
Diskless: no local block device has been assigned to the DRBD driver, which means either no usable device is available, the device was manually detached with the drbdadm command, or it was automatically detached after a lower-level I/O error
Attaching: a transient state while metadata is being read
Failed: a transient state following an I/O error reported by the local block device; the next state is Diskless
Negotiating: a transient state entered when an attach is carried out on an already-connected DRBD device, before any data is read
Inconsistent: the data is inconsistent. This state appears on both nodes immediately after a new resource is created (before the initial full synchronization); it also appears on the target node (sync target) during synchronization
Outdated: the resource's data is consistent but outdated
DUnknown: this state is shown for the peer disk when the network connection to the peer node is unavailable
Consistent: the data of a node without a connection is consistent; once a connection is established, it is decided whether the data is UpToDate or Outdated
UpToDate: the data is consistent and up to date; this is the normal state
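The cs/ro/ds fields discussed above can also be pulled out of a /proc/drbd status line with standard text tools; a sketch run against a sample line from this setup (the variable stands in for a line read from the real /proc/drbd):

```shell
# Sample status line, standing in for the real /proc/drbd
line=' 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----'

# Pull each prefixed field out of the line
cstate=$(echo "$line" | grep -o 'cs:[^ ]*' | cut -d: -f2)
role=$(echo "$line"   | grep -o 'ro:[^ ]*' | cut -d: -f2)
dstate=$(echo "$line" | grep -o 'ds:[^ ]*' | cut -d: -f2)

echo "connection=$cstate role=$role disk=$dstate"
# connection=Connected role=Secondary/Secondary disk=Inconsistent/Inconsistent
```

This is equivalent in spirit to running drbdadm cstate, role, and dstate, but works on captured output, which is handy for monitoring scripts.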
7.5, enabling and disabling resources
###### Manually enable a resource
drbdadm up <resource>
###### Manually disable a resource
drbdadm down <resource>
Comment:
<resource> is the resource name; you can also use all to enable or disable every resource
7.6, promoting and demoting resources
###### Promote a resource
drbdadm primary <resource>
###### Demote a resource
drbdadm secondary <resource>
NOTE: in DRBD's single-primary mode, either of the two connected nodes can become primary at any given time, but only one of them can be primary; if a primary already exists, it must first be demoted before the other node can be promoted. In dual-primary mode there is no such restriction.
8, the initial device synchronization
8.1, choose an initial synchronization source. If the disks are newly initialized or empty, this choice can be arbitrary; but if one of the nodes is already in use and contains useful data, choosing the synchronization source is crucial: starting the initial synchronization in the wrong direction will cause data loss, so be very careful.
8.2, start the initial full synchronization. This step can be performed on only one node, only on initial resource configuration, and only on the node selected as the synchronization source; the command is as follows:
[root@nod1 ~]# drbdadm -- --overwrite-data-of-peer primary drbd
[root@nod1 ~]# cat /proc/drbd # watch the sync progress
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r---n-
ns:1897624 nr:0 dw:0 dr:1901216 al:0 bm:115 lo:0 pe:3 ua:3 ap:0 ep:1 wo:f oos:207988
[=================>..] sync'ed: 90.3% (207988/2103412)K
finish: 0:00:07 speed: 26,792 (27,076) K/sec
###### When synchronization completes, the state looks like this
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:2103412 nr:0 dw:0 dr:2104084 al:0 bm:129 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
Note: drbd is the resource name
###### The synchronization progress can also be viewed with the following command
drbd-overview
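A monitoring script can grab the completion percentage from the progress line shown above; a sketch against a sample line (the variable stands in for a line read from /proc/drbd during a running sync):

```shell
# Sample progress line, as printed by /proc/drbd during a sync
progress="[=================>..] sync'ed: 90.3% (207988/2103412)K"

# Extract the percentage figure and strip the % sign
pct=$(echo "$progress" | grep -o "[0-9.]*%" | tr -d '%')

echo "sync is ${pct}% complete"
# sync is 90.3% complete
```

Looping on such a check (e.g. until the line disappears from /proc/drbd) is one simple way to wait for the initial synchronization to finish.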
9, create a file system
9.1, the file system can only be mounted on the primary (Primary) node, so the DRBD device can only be formatted after the primary node has been set
###### Format the file system
[root@nod1 ~]# mkfs.ext4 /dev/drbd0
###### Mount the file system
[root@nod1 ~]# mount /dev/drbd0 /mnt/
###### Check the mount
[root@nod1 ~]# mount | grep drbd0
/dev/drbd0 on /mnt type ext4 (rw)
Comment:
"/dev/drbd0" is the device defined for the resource in the resource file
###### View the DRBD state
[root@nod1 ~]# drbd-overview
0:drbd/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
Comment:
Primary: the role of the current node; the current node's role is shown first
Secondary: the role of the peer (backup) node
9.2, create a test file in the mounted directory, then unmount it
[root@nod1 ~]# mkdir /mnt/test
[root@nod1 ~]# ls /mnt/
lost+found test
###### When switching the primary, you must ensure the resource is no longer in use
[root@nod1 ~]# umount /mnt/
9.3, switch the nodes
###### First demote the current primary node to secondary
[root@nod1 ~]# drbdadm secondary drbd
###### View the DRBD state
[root@nod1 ~]# drbd-overview
0:drbd/0 Connected Secondary/Secondary UpToDate/UpToDate C r-----
###### Promote the NOD2 node
[root@nod2 ~]# drbdadm primary drbd
###### View the DRBD state
[root@nod2 ~]# drbd-overview
0:drbd/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
9.4, mount the device and verify that the file still exists
[root@nod2 ~]# mount /dev/drbd0 /mnt/
[root@nod2 ~]# ls /mnt/
lost+found test
Eight, DRBD Split-Brain Simulation and Repair
Note: we continue from the experiment above; NOD2 is now the primary node and NOD1 the standby node
1, disconnect the primary (Primary) node. Powering it off, disconnecting it from the network, or reconfiguring its IP would all work; here we choose to disconnect it from the network
2, view the states of the two nodes
[root@nod2 ~]# drbd-overview
0:drbd/0 WFConnection Primary/Unknown UpToDate/DUnknown C r----- /mnt ext4 2.0G 68M 1.9G 4%
[root@nod1 ~]# drbd-overview
0:drbd/0 StandAlone Secondary/Unknown UpToDate/DUnknown r-----
###### As can be seen, the two nodes can no longer communicate; NOD2 is the primary node and NOD1 the standby node
3, promote the NOD1 node to primary and mount the resource
[root@nod1 ~]# drbdadm primary drbd
[root@nod1 ~]# drbd-overview
0:drbd/0 StandAlone Primary/Unknown UpToDate/DUnknown r-----
[root@nod1 ~]# mount /dev/drbd0 /mnt/
[root@nod1 ~]# mount | grep drbd0
/dev/drbd0 on /mnt type ext4 (rw)
4, if the original primary (primary) node is then repaired and brought back online, the split-brain situation appears
[root@nod2 ~]# tail -f /var/log/messages
Sep 19 01:56:06 nod2 kernel: d-con drbd: Terminating drbd_a_drbd
Sep 19 01:56:06 nod2 kernel: block drbd0: helper command: /sbin/drbdadm initial-split-brain minor-0 exit code 0 (0x0)
Sep 19 01:56:06 nod2 kernel: block drbd0: Split-Brain detected but unresolved, dropping connection!
Sep 19 01:56:06 nod2 kernel: block drbd0: helper command: /sbin/drbdadm split-brain minor-0
Sep 19 01:56:06 nod2 kernel: block drbd0: helper command: /sbin/drbdadm split-brain minor-0 exit code 0 (0x0)
Sep 19 01:56:06 nod2 kernel: d-con drbd: conn( NetworkFailure -> Disconnecting )
Sep 19 01:56:06 nod2 kernel: d-con drbd: error receiving ReportState, e: -5 l: 0!
Sep 19 01:56:06 nod2 kernel: d-con drbd: Connection closed
Sep 19 01:56:06 nod2 kernel: d-con drbd: conn( Disconnecting -> StandAlone )
Sep 19 01:56:06 nod2 kernel: d-con drbd: receiver terminated
Sep 19 01:56:06 nod2 kernel: d-con drbd: Terminating drbd_r_drbd
Sep 19 01:56:18 nod2 kernel: block drbd0: role( Primary -> Secondary )
5, view the states of the two nodes again
[root@nod1 ~]# drbdadm role drbd
Primary/Unknown
[root@nod2 ~]# drbdadm role drbd
Primary/Unknown
6, check the connection states of NOD1 and NOD2
[root@nod1 ~]# drbd-overview
0:drbd/0 StandAlone Primary/Unknown UpToDate/DUnknown r----- /mnt ext4 2.0G 68M 1.9G 4%
[root@nod2 ~]# drbd-overview
0:drbd/0 WFConnection Primary/Unknown UpToDate/DUnknown C r----- /mnt ext4 2.0G 68M 1.9G 4%
###### It is clear from the StandAlone state that the nodes will no longer communicate with each other
7. Check the DRBD service status
[root@nod1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
m:res cs ro ds p mounted fstype
0:drbd StandAlone Primary/Unknown UpToDate/DUnknown r----- ext4
[root@nod2 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
m:res cs ro ds p mounted fstype
0:drbd WFConnection Primary/Unknown UpToDate/DUnknown C /mnt ext4
8, steps on NOD1 to make it the standby node again and discard its changes
[root@nod1 ~]# umount /mnt/
[root@nod1 ~]# drbdadm disconnect drbd
drbd: Failure: (162) Invalid configuration request
additional info from kernel:
unknown connection
Command 'drbdsetup disconnect ipv4:192.168.137.225:7789 ipv4:192.168.137.222:7789' terminated with exit code 10
[root@nod1 ~]# drbdadm secondary drbd
[root@nod1 ~]# drbd-overview
0:drbd/0 StandAlone Secondary/Unknown UpToDate/DUnknown r-----
[root@nod1 ~]# drbdadm connect --discard-my-data drbd
###### After executing the above steps, you will find the connection is still not re-established
[root@nod1 ~]# drbd-overview
0:drbd/0 WFConnection Secondary/Unknown UpToDate/DUnknown C r-----
9, the connection must also be re-established on the NOD2 node
[root@nod2 ~]# drbdadm connect drbd
###### View the node connection states
[root@nod2 ~]# drbd-overview
0:drbd/0 Connected Primary/Secondary UpToDate/UpToDate C r----- /mnt ext4 2.0G 68M 1.9G 4%
[root@nod1 ~]# drbd-overview
0:drbd/0 Connected Secondary/Primary UpToDate/UpToDate C r-----
###### Clearly, the cluster has returned to normal operation
Note: as a reminder, in single-primary mode the resource can only be mounted on the primary (Primary) node, and manually switching roles to mount it on the standby node is not recommended
With this, the DRBD installation, configuration, and split-brain troubleshooting is complete. DRBD's dual-primary mode is rarely used, so it is not introduced here.