Installing the Ceph Distributed Storage System on CentOS 7.1
     
  Add Date : 2018-11-21      
         
         
         
There are plenty of introductions to Ceph online, so we will not repeat them here. Sage Weil developed this distributed storage system during his PhD. It originally aimed to be a high-performance distributed file system, but as cloud computing took off, Ceph shifted its focus to distributed block storage (Block Storage) and distributed object storage (Object Storage); the distributed file system, CephFS, is still in the beta stage. Ceph is currently the hottest open-source storage solution for cloud computing and virtual machine deployments; reportedly about 20% of OpenStack deployments use Ceph block storage.

Ceph provides three kinds of storage: object storage, block storage and a file system. Our main interest is block storage: in the second half of the year we will gradually migrate the virtual machines' back-end storage from SAN to Ceph. Although Ceph is only at version 0.94, it is already quite mature. A colleague has been running Ceph in a production environment for more than two years; he ran into many problems, but all were eventually resolved, which shows that Ceph is stable and reliable.

Prepare the hardware environment

Prepare six machines: three physical servers as monitor nodes (mon: ceph-mon1, ceph-mon2, ceph-mon3), two physical servers as storage nodes (osd: ceph-osd1, ceph-osd2), and one virtual machine as the management node (adm: ceph-adm).

Ceph requires an odd number of monitor nodes, at least three (one is enough if you are just experimenting on your own). ceph-adm is optional and its role could live on one of the monitor nodes; keeping ceph-adm separate just makes the architecture a little clearer. You could also put osd on the mon machines, but that is not recommended for a production environment.

The adm server's hardware does not matter much; a low-spec virtual machine is enough, since it is only used to operate and manage Ceph.
Each mon server has two hard disks in RAID1 for the operating system.
Each osd server uses ten 4TB hard disks for Ceph storage, one osd per disk. Every osd needs a journal, so ten disks need ten journals; we use two large-capacity SSDs for the journals, each split into five partitions, with each partition serving as the journal for one osd disk. The remaining two small-capacity SSDs, in RAID1, hold the operating system.
The configuration list is as follows:

| Hostname  | IP Address    | Role | Hardware Info                                            |
|-----------|---------------|------|----------------------------------------------------------|
| ceph-adm  | 192.168.2.100 | adm  | 2 Cores, 4GB RAM, 20GB DISK                              |
| ceph-mon1 | 192.168.2.101 | mon  | 24 Cores, 64GB RAM, 2x750GB SAS                          |
| ceph-mon2 | 192.168.2.102 | mon  | 24 Cores, 64GB RAM, 2x750GB SAS                          |
| ceph-mon3 | 192.168.2.103 | mon  | 24 Cores, 64GB RAM, 2x750GB SAS                          |
| ceph-osd1 | 192.168.2.121 | osd  | 12 Cores, 64GB RAM, 10x4TB SAS, 2x400GB SSD, 2x80GB SSD  |
| ceph-osd2 | 192.168.2.122 | osd  | 12 Cores, 64GB RAM, 10x4TB SAS, 2x400GB SSD, 2x80GB SSD  |
 

Prepare the software environment

All Ceph cluster nodes run CentOS 7.1 (CentOS-7-x86_64-Minimal-1503-01.iso), and all file systems use xfs as officially recommended by Ceph. The operating system of every node is installed on RAID1; all other disks stand alone, without any RAID.

After installing CentOS we need to do a little basic configuration on every node (including ceph-adm), such as disabling SELinux, opening the ports Ceph needs in the firewall, and synchronizing the time:

Disable SELinux:

# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# setenforce 0

Open the ports Ceph needs:

# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
# firewall-cmd --reload

Install the EPEL software repository:

# rpm -Uvh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
# yum -y update
# yum -y upgrade

Install ntp and synchronize the time:

# yum -y install ntp ntpdate ntp-doc
# ntpdate 0.us.pool.ntp.org
# hwclock --systohc
# systemctl enable ntpd.service
# systemctl start ntpd.service
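
As an optional sanity check (my addition, not part of the original write-up), you can confirm that the firewall ports are open and that time synchronization is active; firewall-cmd, ntpq and timedatectl are all available on a stock CentOS 7 install once the packages above are in place:

# firewall-cmd --zone=public --list-ports    # should list 6789/tcp and 6800-7100/tcp
# ntpq -p                                    # at least one peer should be marked with *
# timedatectl | grep NTP                     # "NTP synchronized: yes" once ntpd has settled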
On each osd server we need to partition the ten SAS disks and create an xfs file system on each. The two SSDs used for journals are each split into five partitions, one per data disk; they do not need a file system, Ceph will take care of that itself.

# parted /dev/sda
GNU Parted 3.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
(parted) mkpart primary xfs 0% 100%
(parted) quit

# mkfs.xfs /dev/sda1
meta-data=/dev/sda1              isize=256    agcount=4, agsize=244188544 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=976754176, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=476930, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
...
The commands above handle one disk; they have to be repeated for all ten, and since more servers will be added later it is worth putting the operations into a parted.sh script. Here /dev/sda, b, d, e, g, h, i, j, k and l are the ten data disks, while /dev/sdc and /dev/sdf are the SSDs used for journals:

# vi parted.sh

#!/bin/bash

set -e

if [ ! -x "/sbin/parted" ]; then
    echo "This script requires /sbin/parted to run!" >&2
    exit 1
fi

DISKS="a b d e g h i j k l"
for i in ${DISKS}; do
    echo "Creating partitions on /dev/sd${i} ..."
    parted -a optimal --script /dev/sd${i} -- mktable gpt
    parted -a optimal --script /dev/sd${i} -- mkpart primary xfs 0% 100%
    sleep 1
    #echo "Formatting /dev/sd${i}1 ..."
    mkfs.xfs -f /dev/sd${i}1 &
done

SSDS="c f"
for i in ${SSDS}; do
    parted -s /dev/sd${i} mklabel gpt
    parted -s /dev/sd${i} mkpart primary 0% 20%
    parted -s /dev/sd${i} mkpart primary 21% 40%
    parted -s /dev/sd${i} mkpart primary 41% 60%
    parted -s /dev/sd${i} mkpart primary 61% 80%
    parted -s /dev/sd${i} mkpart primary 81% 100%
done

# sh parted.sh
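
To confirm the partitioning came out as intended (again my own check, not in the original procedure), list the block devices: each SAS disk should show a single xfs partition and each journal SSD five partitions:

# lsblk -o NAME,SIZE,FSTYPE /dev/sd[a-l]
# parted -s /dev/sdc print    # the journal SSD should show five primary partitions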
On ceph-adm, run ssh-keygen to generate the ssh key pair (note that the passphrase is left empty), then copy the key to every Ceph node:

# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:

# ssh-copy-id root@ceph-mon1
# ssh-copy-id root@ceph-mon2
# ssh-copy-id root@ceph-mon3
# ssh-copy-id root@ceph-osd1
# ssh-copy-id root@ceph-osd2
Log in from ceph-adm to each node to confirm that passwordless ssh works and that the annoying host-key confirmation prompt no longer appears (an optional ssh-keyscan shortcut follows the commands below):

# ssh root@ceph-mon1
The authenticity of host 'ceph-mon1 (192.168.2.101)' can't be established.
ECDSA key fingerprint is d7:db:d6:70:ef:2e:56:7c:0d:9c:62:75:b2:47:34:df.
Are you sure you want to continue connecting (yes/no)? yes
# ssh root@ceph-mon2
# ssh root@ceph-mon3
# ssh root@ceph-osd1
# ssh root@ceph-osd2
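
As an optional shortcut (my own addition, not one of the original steps), the host keys can be collected in one pass with ssh-keyscan so the yes/no prompt never shows up at all; only do this on a management network you trust:

# ssh-keyscan ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2 >> ~/.ssh/known_hosts
# ssh root@ceph-osd1 hostname    # should print ceph-osd1 without any prompt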
 

Ceph Deployment

Compared with installing Ceph manually on every node, deploying with the ceph-deploy tool from ceph-adm is much more convenient:

# rpm -Uvh http://ceph.com/rpm-hammer/el7/noarch/ceph-release-1-1.el7.noarch.rpm
# yum update -y
# yum install ceph-deploy -y
Create a ceph working directory; all subsequent operations are carried out in this directory:

# mkdir ~/ceph-cluster
# cd ~/ceph-cluster
Initialize the cluster by telling ceph-deploy which nodes are the monitor nodes; after the command succeeds, ceph.conf, ceph.log, ceph.mon.keyring and other related files are generated in the ceph-cluster directory:

# ceph-deploy new ceph-mon1 ceph-mon2 ceph-mon3
Install Ceph on each node:

# ceph-deploy install ceph-adm ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2
Initialize the monitor nodes:

# ceph-deploy mon create-initial
Check the disks on the Ceph storage nodes:

# ceph-deploy disk list ceph-osd1
# ceph-deploy disk list ceph-osd2
Initialize (zap) the disks with Ceph, then create the osds. Each osd is specified as storage-node:data-disk:journal-partition, mapping one data disk to one journal partition:

Create the ceph-osd1 storage node:

# ceph-deploy disk zap ceph-osd1:sda ceph-osd1:sdb ceph-osd1:sdd ceph-osd1:sde ceph-osd1:sdg ceph-osd1:sdh ceph-osd1:sdi ceph-osd1:sdj ceph-osd1:sdk ceph-osd1:sdl
# ceph-deploy osd create ceph-osd1:sda:/dev/sdc1 ceph-osd1:sdb:/dev/sdc2 ceph-osd1:sdd:/dev/sdc3 ceph-osd1:sde:/dev/sdc4 ceph-osd1:sdg:/dev/sdc5 ceph-osd1:sdh:/dev/sdf1 ceph-osd1:sdi:/dev/sdf2 ceph-osd1:sdj:/dev/sdf3 ceph-osd1:sdk:/dev/sdf4 ceph-osd1:sdl:/dev/sdf5

Create the ceph-osd2 storage node:

# ceph-deploy disk zap ceph-osd2:sda ceph-osd2:sdb ceph-osd2:sdd ceph-osd2:sde ceph-osd2:sdg ceph-osd2:sdh ceph-osd2:sdi ceph-osd2:sdj ceph-osd2:sdk ceph-osd2:sdl
# ceph-deploy osd create ceph-osd2:sda:/dev/sdc1 ceph-osd2:sdb:/dev/sdc2 ceph-osd2:sdd:/dev/sdc3 ceph-osd2:sde:/dev/sdc4 ceph-osd2:sdg:/dev/sdc5 ceph-osd2:sdh:/dev/sdf1 ceph-osd2:sdi:/dev/sdf2 ceph-osd2:sdj:/dev/sdf3 ceph-osd2:sdk:/dev/sdf4 ceph-osd2:sdl:/dev/sdf5
Finally, push the generated configuration from ceph-adm to the other nodes, so that every node has the same ceph configuration:

# ceph-deploy --overwrite-conf admin ceph-adm ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2
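
To double-check that the push worked (a sanity check of my own, not in the original article), ceph.conf and the admin keyring should now be present under /etc/ceph on every node and match the copies in the working directory:

# ssh ceph-osd1 ls -l /etc/ceph/
# ssh ceph-mon1 md5sum /etc/ceph/ceph.conf; md5sum ~/ceph-cluster/ceph.conf    # the two checksums should match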
 

Test

Check whether the configuration was successful:

# ceph health
HEALTH_WARN too few PGs per OSD (10 < min 30)
Increase the number of PGs. The formula Total PGs = (#OSDs * 100) / pool size determines pg_num (and pgp_num should be set to the same value as pg_num): 20 * 100 / 2 = 1000, and Ceph officially recommends rounding to the nearest power of two, so we choose 1024. If all goes well, you should see HEALTH_OK (a small shell sketch of this calculation follows the commands below):

# ceph osd pool set rbd size 2
set pool 0 size to 2
# ceph osd pool set rbd min_size 2
set pool 0 min_size to 2
# ceph osd pool set rbd pg_num 1024
set pool 0 pg_num to 1024
# ceph osd pool set rbd pgp_num 1024
set pool 0 pgp_num to 1024
# ceph health
HEALTH_OK
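
For reference, here is a minimal shell sketch of the PG calculation above (the variable names are mine; 20 osds and a pool size of 2 are this cluster's values, and the loop rounds up to the next power of two as recommended):

# osds=20; size=2
# pgs=$(( osds * 100 / size ))    # 1000
# pow=1; while [ $pow -lt $pgs ]; do pow=$(( pow * 2 )); done
# echo $pow                       # 1024, the value used for pg_num and pgp_num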
For a bit more detail:

# ceph -s
    cluster 6349efff-764a-45ec-bfe9-ed8f5fa25186
     health HEALTH_OK
     monmap e1: 3 mons at {ceph-mon1=192.168.2.101:6789/0,ceph-mon2=192.168.2.102:6789/0,ceph-mon3=192.168.2.103:6789/0}
            election epoch 6, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
     osdmap e107: 20 osds: 20 up, 20 in
      pgmap v255: 1024 pgs, 1 pools, 0 bytes data, 0 objects
            740 MB used, 74483 GB / 74484 GB avail
                1024 active+clean
If these operations went smoothly, remember to write the settings above into ceph.conf and push the configuration to every node again:

# vi ceph.conf

[global]
fsid = 6349efff-764a-45ec-bfe9-ed8f5fa25186
mon_initial_members = ceph-mon1, ceph-mon2, ceph-mon3
mon_host = 192.168.2.101,192.168.2.102,192.168.2.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 2
osd pool default min size = 2
osd pool default pg num = 1024
osd pool default pgp num = 1024

# ceph-deploy admin ceph-adm ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2
 

If you need to start over

If you run into strange problems during deployment that you cannot solve, you can simply wipe everything and start over:

# ceph-deploy purge ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2
# ceph-deploy purgedata ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2
# ceph-deploy forgetkeys
 

Troubleshooting

If you run into network problems, first confirm that passwordless ssh to the other nodes works, that no node is down, and that the firewall rules were actually added:

# ceph health
2015-07-31 14:31:10.545138 7fce64377700  0 -- :/1024052 >> 192.168.2.101:6789/0 pipe(0x7fce60027050 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fce60023e00).fault
HEALTH_OK
# ssh ceph-mon1
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
# firewall-cmd --reload
# ceph health
HEALTH_OK
The initial Ceph installation ran into all sorts of small problems, but troubleshooting went smoothly overall. With this experience we will gradually bring Ceph into the production environment in the second half of the year.
     
         
         
         