Installing and Deploying Ceph on CentOS 7
     
  Add Date : 2018-11-21      
         
         
         
Ceph Overview

Ceph's design goal is to build, on inexpensive storage media, a storage system with high performance, high scalability, and high availability, providing unified storage in the form of file storage, block storage, and object storage. I recently read the documentation and found it quite interesting; it can already provide block storage to OpenStack, which fits the current mainstream well.

Ceph Deployment

1. Prepare the hosts

The experimental environment consists of virtual machines running on VMware; the goal is mainly to gain an intuitive understanding of Ceph.

Step 1: Prepare five hosts

IP address / Host name (role)

192.168.1.110 admin-node (management host; the ceph-deploy tool is run from this host in later steps)

192.168.1.111 node1 (monitor node)

192.168.1.112 node2 (osd.0 node)

192.168.1.113 node3 (osd.1 node)

192.168.1.114 client-node (client; mainly used to mount the storage provided by the Ceph cluster for testing)

Step 2: Modify the /etc/hosts file on the admin-node and add the following content:

192.168.1.111 node1

192.168.1.112 node2

192.168.1.113 node3

192.168.1.114 client-node

Note: the ceph-deploy tool uses host names to communicate with the other nodes. The command to change a host name is: hostnamectl set-hostname "new name"
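For example, the host names assumed in this article could be set by running the command once on each machine (shown here only as an illustration):

sudo hostnamectl set-hostname admin-node    # on 192.168.1.110
sudo hostnamectl set-hostname node1         # on 192.168.1.111
sudo hostnamectl set-hostname node2         # on 192.168.1.112
sudo hostnamectl set-hostname node3         # on 192.168.1.113
sudo hostnamectl set-hostname client-node   # on 192.168.1.114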

Step 3:

Create a ceph storage user on each of the five hosts (as root, or as a user with root privileges):

Create a user

sudo adduser -d /home/ceph -m ceph

set password

sudo passwd ceph

Setting Account Permissions

echo "ceph ALL = (root) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/ceph

sudo chmod 0440 /etc/sudoers.d/ceph

Run visudo to modify the sudoers file:

Change the line "Defaults requiretty" to "Defaults:ceph !requiretty"

If this change is not made, ceph-deploy will fail when executing commands over ssh.
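As an alternative sketch (not from the original write-up), the requiretty exception for the ceph user can also be placed in a drop-in file under /etc/sudoers.d, which avoids editing the main sudoers file by hand; the file name here is only an example:

# put the per-user requiretty exception in a drop-in file (hypothetical name)
echo 'Defaults:ceph !requiretty' | sudo tee /etc/sudoers.d/ceph-requiretty
sudo chmod 0440 /etc/sudoers.d/ceph-requiretty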

2. Configure passwordless ssh access from the admin-node to the ceph user on the other nodes.

Step 1: Run on the admin-node host:

ssh-keygen

(To keep things simple, just press Enter at each prompt to accept the defaults.)

Step 2: Copy the key generated in step 1 to the other nodes:

ssh-copy-id ceph@node1

ssh-copy-id ceph@node2

ssh-copy-id ceph@node3

ssh-copy-id ceph@client-node

Also modify the ~/.ssh/config file and add the following content:

Host node1
    Hostname 192.168.1.111
    User ceph

Host node2
    Hostname 192.168.1.112
    User ceph

Host node3
    Hostname 192.168.1.113
    User ceph

Host client-node
    Hostname 192.168.1.114
    User ceph
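With the keys copied and the config file in place, passwordless access can be verified from the admin-node; each command should answer without asking for a password (a quick optional check):

ssh node1 whoami      # should print: ceph
ssh node2 hostname    # should print: node2
ssh node3 hostname    # should print: node3
ssh client-node hostname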

3. Install ceph-deploy on the admin-node

Step 1: Add a yum repository configuration file:

sudo vim /etc/yum.repos.d/ceph.repo

Add the following:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
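Equivalently, the repository file can be created in one step with a here-document (just a convenience sketch of the same content):

sudo tee /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
EOF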

Step 2: Update the package sources and install ceph-deploy and the time synchronization software:

sudo yum update && sudo yum install ceph-deploy

sudo yum install ntp ntpdate ntp-doc

4. Disable the firewall and SELinux on all nodes (execute on every node), plus a few other preparation steps:

sudo systemctl stop firewalld.service

sudo setenforce 0

sudo yum install yum-plugin-priorities
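Note that systemctl stop and setenforce 0 only last until the next reboot; a minimal sketch to make both changes persistent, assuming the default CentOS 7 file locations:

# keep the firewall disabled across reboots
sudo systemctl disable firewalld.service
# switch SELinux to permissive permanently
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config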

Summary: after the steps above, all the prerequisites are in place; what follows is the actual Ceph deployment.

5. As the ceph user created earlier, create a working directory on the admin-node:

mkdir my-cluster

cd my-cluster

6. Create the cluster

The roles of the nodes: node1 is the monitor node, node2 and node3 are OSD nodes, and admin-node is the management node.

Step 1: Execute the following command to create a cluster with node1 as the monitor node:

ceph-deploy new node1

Executing this command produces a ceph.conf file in the current directory; open the file and add the following line:

osd pool default size = 2
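After the edit, ceph.conf will look roughly like the sketch below; the fsid and monitor entries are generated by ceph-deploy and will differ, and only the last line is added by hand:

[global]
fsid = <generated by ceph-deploy>
mon_initial_members = node1
mon_host = 192.168.1.111
osd pool default size = 2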

Step 2: Install Ceph on the nodes using ceph-deploy:

ceph-deploy install admin-node node1 node2 node3

Step 3: Initialize the monitor node and collect the keyrings:

ceph-deploy mon create-initial
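When this command finishes, the working directory should contain the collected keyring files (exact names can vary with the Ceph version); a quick way to confirm:

# e.g. ceph.client.admin.keyring, ceph.bootstrap-osd.keyring, ...
ls *.keyring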

7. Allocate disk space on the storage nodes for the OSD processes:

ssh node2

sudo mkdir /var/local/osd0

exit

ssh node3

sudo mkdir /var/local/osd1

exit
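The same directories can also be created in one pass from the admin-node over ssh (an equivalent sketch of the commands above):

ssh node2 "sudo mkdir -p /var/local/osd0"
ssh node3 "sudo mkdir -p /var/local/osd1"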

Then, from the admin-node, prepare and activate the OSD processes on the other nodes via ceph-deploy:

ceph-deploy osd prepare node2:/var/local/osd0 node3:/var/local/osd1

ceph-deploy osd activate node2:/var/local/osd0 node3:/var/local/osd1

Synchronize the configuration file and keyring from the admin-node to the other nodes:

ceph-deploy admin admin-node node1 node2 node3

sudo chmod +r /etc/ceph/ceph.client.admin.keyring

Finally, check the health status of the cluster with the following command:

ceph health

If everything succeeded, it will report: HEALTH_OK
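Besides ceph health, two standard commands give a more detailed picture of the cluster (optional checks):

# overall status: monitors, OSDs and placement groups
ceph -s
# OSD layout in the CRUSH tree
ceph osd tree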

Using the Ceph storage:

1. Prepare the client-node

Run the following on the admin-node:

ceph-deploy install client-node

ceph-deploy admin client-node

2. Create a block device image:

rbd create foo --size 4096

Map the block device provided by Ceph on the client-node:

sudo rbd map foo --pool rbd --name client.admin
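To confirm the mapping, rbd showmapped lists the mapped images and the local device each one was attached to (an optional check):

# should show image foo from pool rbd mapped to a /dev/rbdX device
sudo rbd showmapped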

3. Create a file system

sudo mkfs.ext4 -m0 /dev/rbd/foo

4. Mount the file system

sudo mkdir /mnt/test

sudo mount /dev/rbd/foo /mnt/test

cd /mnt/test
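A simple write test confirms the mounted RBD-backed file system works; the file name here is just an example:

echo "hello ceph" | sudo tee /mnt/test/hello.txt
df -h /mnt/test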

Finished!
     
         
         
         