Installing and Deploying Ceph on CentOS 7
Add Date : 2018-11-21
Ceph Overview

Ceph is designed to deliver high performance, high scalability, and high availability on inexpensive storage hardware, and to provide unified storage covering file storage, block storage, and object storage. I recently read through the documentation and found it quite interesting; it can already provide block storage to OpenStack, which fits well with the current mainstream.

Ceph Deployment

1. Prepare the hosts

The experiments run in virtual machines on VMware, mainly to get an intuitive feel for Ceph.

Step 1: Prepare five hosts

Each host needs an IP address and a hostname; the roles are:

- admin-node: management host; ceph-deploy is run from here to operate on the other hosts
- node1: monitor node
- node2: osd.0 node
- node3: osd.1 node
- client-node: client; mounts the storage provided by the Ceph cluster for testing

Step 2: Modify the /etc/hosts file on the admin-node, adding entries for node1, node2, node3, and client-node.

Note: the ceph-deploy tool uses hostnames to communicate with the other nodes. The command to change a hostname is: hostnamectl set-hostname "new name"
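The article does not list the actual addresses. A hypothetical /etc/hosts fragment, assuming a 192.168.1.0/24 network (the IPs below are placeholders; substitute your hosts' real addresses):

```
# Hypothetical /etc/hosts entries -- replace the placeholder IPs
192.168.1.10  admin-node
192.168.1.11  node1
192.168.1.12  node2
192.168.1.13  node3
192.168.1.14  client-node
```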

Step 3:

Create a ceph user on all five hosts (run as root, or as a user with root privileges):

Create the user:

sudo useradd -d /home/ceph -m ceph

Set a password:

sudo passwd ceph

Grant sudo privileges:

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

sudo chmod 0440 /etc/sudoers.d/ceph

Run visudo to modify the sudoers file:

Change the line "Defaults requiretty" to "Defaults:ceph !requiretty".

Without this change, ceph-deploy will fail when it executes commands over ssh.
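The user-creation steps above can be sketched as a single per-node helper. This is a hedged convenience sketch, not from the article; the DRYRUN mode is an added assumption that prints the commands instead of running them, so the sketch can be tried without root:

```shell
# Hypothetical helper: prepare the 'ceph' user on one node.
# Run with DRYRUN=1 to print the commands instead of executing them
# (a real run requires root / sudo).
setup_ceph_user() {
  run() { if [ "${DRYRUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }
  run sudo useradd -d /home/ceph -m ceph
  run sudo passwd ceph
  run sudo sh -c 'echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph'
  run sudo chmod 0440 /etc/sudoers.d/ceph
}

DRYRUN=1 setup_ceph_user   # prints the commands without executing them
```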

2. Configure passwordless ssh from the admin-node to the ceph user on the other nodes.

Step 1: On the admin-node, generate a key pair:

ssh-keygen

(For simplicity, press Enter at each prompt to accept the defaults.)

Step 2: Copy the key generated in step 1 to the other nodes:

ssh-copy-id ceph@node1

ssh-copy-id ceph@node2

ssh-copy-id ceph@node3

ssh-copy-id ceph@client-node

Also edit the ~/.ssh/config file and add the following:

Host node1
    User ceph

Host node2
    User ceph

Host node3
    User ceph

Host client-node
    User ceph
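The repeated stanzas can also be generated with a short loop (a convenience sketch, not from the article; redirect the output into ~/.ssh/config yourself):

```shell
# Print a "Host <name> / User ceph" stanza for each node.
# Append to the config with:  ... done >> ~/.ssh/config
for host in node1 node2 node3 client-node; do
  printf 'Host %s\n    User ceph\n' "$host"
done
```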

3. Install ceph-deploy on the admin-node

Step 1: Add a yum repository file:

sudo vim /etc/yum.repos.d/ceph.repo

Add the following:

 [ceph-noarch]

 name=Ceph noarch packages

 baseurl=http://ceph.com/rpm-firefly/el7/noarch

 enabled=1

 gpgcheck=1

 type=rpm-md

 gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
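The repository file can also be written in one command with a heredoc. The [ceph-noarch] section name follows the standard Ceph quick-start layout; the snippet writes to /tmp for illustration — use /etc/yum.repos.d/ceph.repo (with sudo) on the real system:

```shell
# Write the Ceph firefly-era noarch repo definition in one step.
# Target path is /tmp only for illustration; the real location is
# /etc/yum.repos.d/ceph.repo.
cat > /tmp/ceph.repo <<'EOF'
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
EOF
```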

Step 2: Update the package index and install ceph-deploy and the NTP time-synchronization packages:

sudo yum update && sudo yum install ceph-deploy

sudo yum install ntp ntpdate ntp-doc

4. Disable the firewall and SELinux on all nodes (run on every node), plus a few other preparation steps:

sudo systemctl stop firewalld.service

sudo setenforce 0

sudo yum install yum-plugin-priorities

Summary: with the steps above, the prerequisites are in place; next comes the actual Ceph deployment.

5. As the ceph user created earlier, create a working directory on the admin-node:

mkdir my-cluster

cd my-cluster

6. Create the cluster

Roles of the nodes: node1 is the monitor node, node2 and node3 are OSD nodes, and admin-node is the management node.

Step 1: Run the following command to create a cluster with node1 as the monitor:

ceph-deploy new node1

This command generates a ceph.conf file in the current directory. Open the file and add the following line (there are only two OSDs, so the default replica count is set to 2):

osd pool default size = 2

Step 2: Use ceph-deploy to install Ceph on the nodes:

ceph-deploy install admin-node node1 node2 node3

Step 3: Initialize the monitor node and gather the keys:

ceph-deploy mon create-initial

7. Allocate disk space on the storage nodes for the OSD daemons:

ssh node2

sudo mkdir /var/local/osd0

exit

ssh node3

sudo mkdir /var/local/osd1

exit

Then, from the admin-node, prepare and activate the OSD daemons on the other nodes with ceph-deploy:

ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

Copy the configuration file and keyring from the admin-node to the other nodes:

ceph-deploy admin admin-node node1 node2 node3

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Finally, check the health of the cluster:

ceph health

If everything succeeded, it reports: HEALTH_OK
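The health check can be wrapped in a small retry loop, since a freshly created cluster may take a moment to settle. This is a hedged sketch, not from the article; the command is passed as a parameter so the loop can be exercised without a live cluster:

```shell
# Poll cluster health until it reports HEALTH_OK, retrying a few times.
# The health command is a parameter (default "ceph health") so the
# loop itself can be tried without a running Ceph cluster.
check_health() {
  cmd="${1:-ceph health}"
  status=""
  for attempt in 1 2 3 4 5; do
    status=$($cmd 2>/dev/null)
    [ "$status" = "HEALTH_OK" ] && break
    sleep 2
  done
  echo "$status"
}

# On the admin-node, run:  check_health
```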

Using the Ceph storage:

1. Prepare the client-node

Run on the admin-node:

ceph-deploy install client-node

ceph-deploy admin client-node

2. Create a block device image:

rbd create foo --size 4096

Map the block device provided by Ceph on the client-node:

sudo rbd map foo --pool rbd --name client.admin

3. Create a file system:

sudo mkfs.ext4 -m0 /dev/rbd/foo

4. Mount the file system:

sudo mkdir /mnt/test

sudo mount /dev/rbd/foo /mnt/test

cd /mnt/test

Done!
  CopyRight 2002-2016 newfreesoft.com, All Rights Reserved.