Install RAID 6 (Striping with Double Distributed Parity)
Add Date: 2017-01-08
RAID 6 is an upgraded version of RAID 5 with two sets of distributed parity, so it stays fault-tolerant even after two disks fail. Mission-critical systems can keep running when two disks fail at the same time. It is similar to RAID 5, but more robust, because it uses one more disk for parity.

In the previous article we looked at RAID 5 with distributed parity; in this article we will set up RAID 6 with double distributed parity. Do not expect it to perform better than other RAID levels unless you also install a dedicated hardware RAID controller. With RAID 6, even if we lose two disks we can replace them and rebuild the data from the parity information.
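To make the parity idea concrete, here is a toy shell sketch of the single "P" parity, which is a bytewise XOR across the data disks. RAID 6 additionally stores a second, Reed-Solomon-based "Q" parity, which is what lets it survive two simultaneous failures; that math is omitted here, and the byte values below are invented purely for illustration.

```shell
# Toy illustration of RAID parity: the "P" parity is a bytewise XOR
# across the data disks. RAID 6 also keeps a Reed-Solomon "Q" parity
# (not shown), which is why it survives a second disk failure.
D1=170; D2=85; D3=204          # hypothetical data bytes from three disks
P=$(( D1 ^ D2 ^ D3 ))          # parity byte stored on the parity stripe
# If the disk holding D2 fails, XOR the survivors with P to recover it:
RECOVERED=$(( D1 ^ D3 ^ P ))
echo "recovered=$RECOVERED expected=$D2"  # prints: recovered=85 expected=85
```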

Installing RAID 6 on Linux

Creating a RAID 6 array requires a minimum of four disks, and some sets use even more. The array stripes data across all member disks, so reads are fast because data comes from every disk at once, while writes are slower because each write must be striped across multiple disks along with two parity blocks.

Many people ask why we would use RAID 6 when its performance is unremarkable compared to other RAID levels. The short answer: choose RAID 6 when you need high fault tolerance. Database environments with high availability requirements often need RAID 6, because the data is critical and worth protecting at almost any cost; it is also very useful in video-streaming environments.

Advantages and disadvantages of RAID 6

Good performance.
RAID 6 is expensive, as it requires two independent disks' worth of capacity for the parity function.
You lose the capacity of two disks to hold the (dual) parity information.
No data is lost even if two disks fail; after replacing the failed disks, the data can be rebuilt from parity.
Reads are faster than RAID 5 because data is read from multiple disks, but without a dedicated hardware RAID controller write performance will be quite poor.
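The capacity cost mentioned above is easy to compute: with N equally sized disks, RAID 6 leaves (N - 2) disks' worth of usable space. A quick shell sketch using the four 20GB disks from this guide's setup:

```shell
# RAID 6 usable capacity: two disks' worth of space goes to dual parity.
DISKS=4        # number of member disks
DISK_GB=20     # size of each disk in GB
RAW=$(( DISKS * DISK_GB ))
USABLE=$(( (DISKS - 2) * DISK_GB ))
echo "raw=${RAW}GB usable=${USABLE}GB"  # prints: raw=80GB usable=40GB
```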

Creating a RAID 6 array requires a minimum of four disks. You can add more, but then you should have a dedicated RAID controller: software RAID gives no performance benefit at this level, so a physical RAID controller is needed for better RAID 6 performance.

If you are new to RAID setups, we recommend reading our earlier RAID articles first.

My server setup

Operating System: CentOS 6.5 Final
IP Address:
Host Name: rd6.tecmintlocal.com
Disk 1 [20GB]: /dev/sdb
Disk 2 [20GB]: /dev/sdc
Disk 3 [20GB]: /dev/sdd
Disk 4 [20GB]: /dev/sde

This is Part 5 of a 9-part tutorial series, in which we will create and set up software RAID 6 (striping with double distributed parity) using four 20GB disks named /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde on a Linux system or server.


Step 1: Install the mdadm tool and check the disks

1. If you have followed our last two RAID articles (Parts 2 and 3), you have already seen how to install the mdadm tool. If this is the first article you are reading, note that mdadm is the tool used to create and manage RAID on Linux systems; install it with the command below for your distribution.

# yum install mdadm     [on RedHat systems]
# apt-get install mdadm [on Debian systems]
2. After installing the tool, verify the four disks we will use to create the RAID with the following fdisk command.

# fdisk -l | grep sd

Check Disk in Linux

3. Before creating the RAID, check whether any RAID partitions already exist on our disks.

# mdadm -E /dev/sd[b-e]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde # or

Check the RAID partition on the disk

Note: In the picture above, no super-block is detected, meaning no RAID is defined on the four disks. Now we can begin to create RAID 6.


Step 2: Create disk partitions for RAID 6

4. Now create a partition for RAID on each of /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde with the following fdisk commands. Here we show how to partition the sdb disk; the same steps apply to the other disks.

Create the /dev/sdb partition

# fdisk /dev/sdb
Follow the instructions below to create the partition.

Press n to create a new partition.
Then press p to select it as a primary partition.
Next, select partition number 1.
Press the Enter key twice to accept the default first and last sectors.
Then press p to print the partition just created.
Press l to list all the available partition types.
Press t to change the partition type.
Enter fd to set the Linux RAID type, then press Enter.
Press p again to review the change.
Press w to write the changes to disk.
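For reference, the same keystrokes can be driven non-interactively. The sketch below only builds and prints the sequence; the informational steps (p and l) are dropped since a script does not need them. On a real system you would pipe the sequence into fdisk /dev/sdb as root against an actual disk, so treat this as an assumption-laden sketch rather than a tested recipe.

```shell
# Build the fdisk keystroke sequence described above: new (n) primary (p)
# partition 1, two Enters for the default sectors, type (t) fd = Linux
# RAID, then write (w). The print/list steps are omitted in script form.
KEYS='n
p
1


t
fd
w
'
printf '%s' "$KEYS"
# On a real system (root, real disk): printf '%s' "$KEYS" | fdisk /dev/sdb
```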

Create the /dev/sdc partition

# fdisk /dev/sdc

Create the /dev/sdd and /dev/sde partitions in the same way.

5. After creating the partitions, it is a good habit to check the super-blocks on the new partitions. If no super-blocks exist, we can go on and create a new RAID array.

# mdadm -E /dev/sd[b-e]1
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 # or

Check Raid on New Partitions


Step 3: Create the md device (RAID)

6. Now create the RAID device md0 (i.e. /dev/md0) with the following command, applying RAID level 6 across all the newly created partitions, then confirm the state of the array.

# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# cat /proc/mdstat

Create Raid 6 devices

7. You can also watch the progress of the RAID creation with the watch command, as shown below.

# watch -n1 cat /proc/mdstat

Check the creation of RAID 6
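While the array builds, the mdstat output shown by watch includes a resync progress line. The snippet below pulls the percentage out of a sample of that output; the sample itself is hypothetical (the device names match this guide, but the block counts, percentage and speed are invented for illustration).

```shell
# Hypothetical /proc/mdstat snapshot during the initial RAID 6 resync;
# the numbers are invented for illustration.
MDSTAT='md0 : active raid6 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      40894464 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [=====>...............]  resync = 28.6% (5845504/20447232) finish=1.2min speed=192160K/sec'
# Extract just the resync percentage from the status text:
echo "$MDSTAT" | grep -o 'resync = [0-9.]*%'   # prints: resync = 28.6%
```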

8. Verify the RAID devices with the following command.

# mdadm -E /dev/sd[b-e]1
Note: the command above displays the information for all four disks, which is quite long, so the full output is not reproduced here.

9. Next, check the RAID array details to confirm that the resynchronization process has started.

# mdadm --detail /dev/md0

Check Raid 6 array

Step 4: Create a file system on the RAID device

10. Create an ext4 file system on /dev/md0 and mount it at /mnt/raid6. We use ext4 here, but you can use any file system type of your choice.

# mkfs.ext4 /dev/md0

Create a file system on the RAID 6

11. After creating the file system, mount it at /mnt/raid6 and verify the files under the mount point; we should see the lost+found directory.

# mkdir /mnt/raid6
# mount /dev/md0 /mnt/raid6/
# ls -l /mnt/raid6/
12. Create some files under the mount point, add some text to one of them and verify its content.

# touch /mnt/raid6/raid6_test.txt
# ls -l /mnt/raid6/
# echo "tecmint raid setups" > /mnt/raid6/raid6_test.txt
# cat /mnt/raid6/raid6_test.txt

Verify RAID Content

13. Add the following entry to /etc/fstab so the device is mounted automatically at boot; the mount point may differ in your environment.

# vim /etc/fstab
/dev/md0    /mnt/raid6    ext4    defaults    0 0
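An fstab line has six whitespace-separated fields: device, mount point, file system type, options, dump flag and fsck pass number. Here is a small sketch that checks the field count of such an entry in a scratch file, without touching the real /etc/fstab:

```shell
# Sanity-check an fstab entry in a scratch file before editing the real
# /etc/fstab: a valid line has exactly six whitespace-separated fields.
SCRATCH=$(mktemp)
echo '/dev/md0 /mnt/raid6 ext4 defaults 0 0' >> "$SCRATCH"
if awk 'NF != 6 { exit 1 }' "$SCRATCH"; then
    echo "fstab entry OK"    # printed when every line has six fields
fi
rm -f "$SCRATCH"
```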

Automatically mount RAID 6 device

14. Next, run the mount -a command to verify that the fstab entry has no errors.

# mount -av

Verify RAID automatically mount

Step 5: Save RAID 6 configuration

15. Note that RAID has no configuration file by default. We need to save it manually with the command below, and then check the state of the device /dev/md0.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf
# cat /etc/mdadm.conf
# mdadm --detail /dev/md0

Save RAID 6 configurations

Check the status of RAID 6

Step 6: Add a spare disk

16. So far we have used four disks, two of which effectively hold parity information. Thanks to the dual parity of RAID 6, we can still get at the data even if any two disks fail.

However, if a third disk fails before the failed disks are replaced, data is lost, so it is wise to keep a spare. A spare disk can be defined while creating the RAID set, but I did not define one before creating this array. Fortunately, a spare can also be added after a disk fails or after the RAID set is created. Now that the RAID is built, let me demonstrate how to add a spare disk.

For demonstration purposes, I have hot-plugged a new disk (i.e. /dev/sdf). Let us verify that the disk is visible.

# ls -l /dev/ | grep sd

Check the new disk

17. Now reconfirm with mdadm that the new disk is not already configured in a RAID.

# mdadm --examine /dev/sdf

Check the new RAID disk

Note: Just as we created partitions on the earlier four disks, we use the fdisk command to create a new partition on the newly inserted disk.

# fdisk /dev/sdf

Creating a partition for / dev / sdf

18. After creating the new partition on /dev/sdf, confirm that there is no RAID on the new partition, then add the spare disk to the RAID device /dev/md0 and verify that it was added.

# mdadm --examine /dev/sdf
# mdadm --examine /dev/sdf1
# mdadm --add /dev/md0 /dev/sdf1
# mdadm --detail /dev/md0

Verify Raid on sdf partition

Add sdf Partition to Raid


Verify sdf partition information

Step 7: Check the RAID 6 fault tolerance

19. Now let us check whether the spare drive takes over automatically when any disk in the array fails. To test this, I will manually mark a disk as a faulty device.

Here we mark /dev/sdd1 as the failed disk.

# mdadm --manage --fail /dev/md0 /dev/sdd1

Check RAID 6 fault tolerance

20. Now view the RAID details and check whether the spare disk has started synchronizing.

# mdadm --detail /dev/md0

Check RAID automatic synchronization

Wow! Here we can see that the spare disk has been activated and the rebuild process has begun. At the bottom we can see the failed disk /dev/sdd1 marked as faulty. The rebuild progress can be followed with the following command.

# cat /proc/mdstat

RAID 6 Auto Sync

Conclusion:

Here we saw how to set up RAID 6 from four disks. This RAID level is one of the more expensive setups, but it offers high redundancy. In the next article we will see how to create nested RAID 10, and more. Stay tuned.
  CopyRight 2002-2020 newfreesoft.com, All Rights Reserved.