Installing and Deploying Ceph on CentOS 7
     
  Add Date : 2018-11-21      
         
       
         
Ceph Overview

Ceph's design goal is to build a high-performance, highly scalable, highly available storage system on inexpensive storage media, providing unified storage: file storage, block storage, and object storage. I recently read through the documentation and found it quite interesting; it can already provide block storage to OpenStack, which fits the current mainstream well.

Ceph Deployment

1. Prepare the hosts

The experiments run in virtual machines on VMware; the goal is mainly to get an intuitive, hands-on understanding of Ceph.

Step 1: Prepare five hosts

IP address and host name (role):

192.168.1.110 admin-node (management host; the follow-up ceph-deploy commands are run here to drive the other nodes)

192.168.1.111 node1 (monitor node)

192.168.1.112 node2 (osd.0 node)

192.168.1.113 node3 (osd.1 node)

192.168.1.114 client-node (client; used mainly to mount the storage provided by the ceph cluster for testing)

Step 2: On the admin-node, edit the /etc/hosts file and add the following:

192.168.1.111 node1

192.168.1.112 node2

192.168.1.113 node3

192.168.1.114 client-node

Note: ceph-deploy uses host names to communicate with the other nodes. To change a host name, run: hostnamectl set-hostname <new-name>
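As a convenience sketch (assuming root ssh access to each machine is already possible at this point), the host names from the table above could be set remotely from a single terminal:

# run once per host; adjust the IPs and names to match the table above
ssh root@192.168.1.111 hostnamectl set-hostname node1
ssh root@192.168.1.112 hostnamectl set-hostname node2
ssh root@192.168.1.113 hostnamectl set-hostname node3
ssh root@192.168.1.114 hostnamectl set-hostname client-node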

Step 3: Create a ceph user on all five hosts (as root, or with root privileges).

Create the user:

sudo adduser -d /home/ceph -m ceph

Set its password:

sudo passwd ceph

Grant the account sudo privileges:

echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

sudo chmod 0440 /etc/sudoers.d/ceph

Run visudo and modify the sudoers file:

Change the line "Defaults requiretty" to "Defaults:ceph !requiretty"

Without this change, ceph-deploy will get an error when it executes commands over ssh.
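Since these user-creation steps must be repeated on all five hosts, they can be collected into a small script. A minimal sketch, run as root on each host (the passwd step prompts interactively):

# create the ceph user, set its password, and grant passwordless sudo
useradd -d /home/ceph -m ceph
passwd ceph
echo "ceph ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/ceph
chmod 0440 /etc/sudoers.d/ceph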

2. Configure the admin-node for passwordless ssh access to the other nodes.

Step 1: On the admin-node, run:

ssh-keygen

(To keep things simple, just press Enter at each prompt to accept the defaults.)

Step 2: Copy the key generated in step 1 to the other nodes:

ssh-copy-id ceph@node1

ssh-copy-id ceph@node2

ssh-copy-id ceph@node3

ssh-copy-id ceph@client-node

At the same time, edit the ~/.ssh/config file and add the following:

Host node1

Hostname 192.168.1.111

User ceph

Host node2

Hostname 192.168.1.112

User ceph

Host node3

Hostname 192.168.1.113

User ceph

Host client-node

Hostname 192.168.1.114

User ceph
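With the keys copied and ~/.ssh/config in place, it is worth confirming that each node is reachable without a password prompt before going any further, for example:

ssh node1 hostname   # should print "node1" with no password prompt
ssh node2 hostname
ssh node3 hostname
ssh client-node hostname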

3. Install ceph-deploy on the admin-node

Step 1: Add a yum repository configuration file:

sudo vim /etc/yum.repos.d/ceph.repo

Add the following:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
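Before installing anything, it can be worth confirming that yum actually picks up the new repository; a quick check along these lines should show the ceph-noarch repo:

sudo yum makecache            # rebuild the metadata cache
yum repolist | grep -i ceph   # the ceph-noarch repo should appear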

Step 2: Update the package sources and install ceph-deploy and the time-synchronization software:

sudo yum update && sudo yum install ceph-deploy

sudo yum install ntp ntpdate ntp-doc
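The command above installs ntp but does not start it. Since Ceph monitors are sensitive to clock skew between nodes, a reasonable extra step (a sketch, on each node) is to enable and start the daemon:

sudo systemctl enable ntpd
sudo systemctl start ntpd
ntpq -p   # verify that ntpd is exchanging time with its servers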

4. Turn off the firewall and SELinux enforcement on all nodes (run on every node), along with a few other preparatory steps

sudo systemctl stop firewalld.service

sudo setenforce 0

sudo yum install yum-plugin-priorities
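Note that systemctl stop and setenforce 0 only last until the next reboot. To make the changes persistent, something like the following sketch can be applied on each node (it assumes the stock /etc/selinux/config layout):

sudo systemctl disable firewalld.service                                # don't start the firewall at boot
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config     # keep SELinux non-enforcing after reboot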

Summary: with the above prerequisites in place, we are ready for the actual Ceph deployment.

5. As the ceph user created earlier, create a working directory on the admin-node

mkdir my-cluster

cd my-cluster

6. Create the cluster

Roles of the nodes: node1 is the monitor node, node2 and node3 are the OSD nodes, and admin-node is the management node.

Step 1: Execute the following command to create a cluster with node1 as the monitor node.

ceph-deploy new node1

Running this command generates a ceph.conf file in the current directory; open it and add the following line:

osd pool default size = 2
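This setting matters because the experiment has only two OSDs (osd.0 and osd.1): a default pool size of 2 lets the cluster reach a healthy state with two replicas per object. After the edit, the ceph.conf should look roughly like the sketch below; the fsid is a UUID generated by ceph-deploy and will differ, and the exact set of auth lines may vary by version, so treat this as an illustration rather than literal content:

[global]
fsid = <generated-uuid>
mon_initial_members = node1
mon_host = 192.168.1.111
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2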

Step 2: Use ceph-deploy to install ceph on the nodes

ceph-deploy install admin-node node1 node2 node3

Step 3: Initialize the monitor node and gather the keys:

ceph-deploy mon create-initial
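If the initialization succeeds, ceph-deploy leaves the gathered keys in the working directory. A quick listing should show files along these lines (the exact set varies by version):

ls
# ceph.conf  ceph.log  ceph.mon.keyring
# ceph.client.admin.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-mds.keyring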

7. Allocate storage for the OSD daemons on the storage nodes:

ssh node2

sudo mkdir /var/local/osd0

exit

ssh node3

sudo mkdir /var/local/osd1

exit

Then, from the admin-node, prepare the OSD daemons on those nodes and activate them:

ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

Synchronize the configuration file and keyring from the admin-node to the other nodes:

ceph-deploy admin admin-node node1 node2 node3

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Finally, check the health status of the cluster:

ceph health

If everything succeeded, it will report: HEALTH_OK
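For more detail than the one-line health summary, the following commands (run on a node that has the admin keyring, e.g. the admin-node) show the overall status and the OSD layout; both OSDs should be up and in:

ceph -s          # cluster status: monitors, OSD map, placement groups
ceph osd tree    # should list osd.0 on node2 and osd.1 on node3 as "up"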

Using the Ceph storage:

1. Prepare the client-node

Run on the admin-node:

ceph-deploy install client-node

ceph-deploy admin client-node

2. Create a block device image:

rbd create foo --size 4096

Map the block device provided by Ceph on the client-node:

sudo rbd map foo --pool rbd --name client.admin
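Assuming the map command succeeds, the kernel attaches the image as an rbd block device on the client-node; rbd showmapped reports which device node it received:

sudo rbd showmapped   # lists the pool, image name, and the /dev/rbdX device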

3. Create a file system on the device

sudo mkfs.ext4 -m0 /dev/rbd/foo

4. Mount the file system

sudo mkdir /mnt/test

sudo mount /dev/rbd/foo /mnt/test

cd /mnt/test
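As a final smoke test, write a file into the mounted directory and check the available space; this is just a sanity check, not part of the deployment proper:

df -h /mnt/test                      # shows the ~4 GB ext4 file system backed by rbd
echo "hello ceph" | sudo tee /mnt/test/hello.txt
cat /mnt/test/hello.txt              # should print "hello ceph"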

Finished!
     
         
       
         