How to Configure Ceph Storage on CentOS 7.0
     
  Add Date : 2018-11-21      
         
       
         
Ceph is an open source software platform for distributed storage on a cluster of computers. When you plan to build a cloud, the first thing you need to decide is how to implement your storage. Ceph, one of Red Hat's original open source technologies, is built on an object storage system called RADOS, with a set of gateway APIs that present the data in block, file, and object modes. Because it is open source, this portable storage platform can be installed and used on both public and private clouds. The topology of a Ceph cluster is designed around replication and distribution of information, a design that provides inherent data integrity. It is designed to be fault-tolerant and, properly configured, can run on commodity hardware as well as more advanced systems.

Ceph can be installed on any Linux distribution, but to run properly it requires a recent kernel and other up-to-date libraries. In this guide, we will use a minimal installation of CentOS 7.0.

System Resources

** CEPH-STORAGE **
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
IP: 45.79.136.163
FQDN: ceph-storage.linoxide.com

** CEPH-NODE **
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
IP: 45.79.171.138
FQDN: ceph-node.linoxide.com
 

Pre-installation configuration

Before installing Ceph storage, we need to complete a few steps on each node. The first is to make sure the network is configured on every node and that the nodes can reach each other.

Configuring Hosts

To configure the hosts entry on each node, open the default hosts configuration file as shown below (LCTT translator's note: or set up the appropriate DNS resolution instead).

# vi /etc/hosts
45.79.136.163 ceph-storage ceph-storage.linoxide.com
45.79.171.138 ceph-node ceph-node.linoxide.com
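With the entries in place, a quick connectivity check confirms that each node can reach the other by name. This is a sketch using the hostnames defined above, assuming ICMP is not blocked between the nodes:

```shell
# From ceph-node: resolve and reach the storage node by short name
ping -c 2 ceph-storage
# From ceph-storage: the same check in the other direction
ping -c 2 ceph-node
```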
Install VMware Tools

If your working environment is a VMware virtual environment, it is recommended that you install the open VM tools. You can install them with the following command.

#yum install -y open-vm-tools
Configure the firewall

If you are working in a restrictive environment with the firewall enabled, make sure the following ports are open on your Ceph storage admin node and on the client nodes.

You need to open ports 80, 2003, and 4505-4506 on your Admin Calamari node, and allow access through port 80 to the Ceph or Calamari admin node, so that clients in your network can reach the Calamari web user interface.

You can start and enable the firewall on CentOS 7 with the following commands.

#systemctl start firewalld
#systemctl enable firewalld
Run the following commands on the Admin Calamari node to open the ports mentioned above.

# firewall-cmd --zone=public --add-port=80/tcp --permanent
# firewall-cmd --zone=public --add-port=2003/tcp --permanent
# firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
# firewall-cmd --reload
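Before moving on, you can confirm the rules were actually applied. This is a sketch assuming the default `public` zone is the active one:

```shell
# List the ports currently opened in the public zone; after the reload
# above, it should include 80/tcp, 2003/tcp, and 4505-4506/tcp
firewall-cmd --zone=public --list-ports
```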
On the Ceph Monitor node, you need to allow the following port through the firewall.

# firewall-cmd --zone=public --add-port=6789/tcp --permanent
Then allow the following range of default ports, so that clients and monitor nodes can interact with and send data to the OSDs.

# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
If you are working in a non-production environment, we recommend disabling the firewall and SELinux; in our test environment we will disable both.

#systemctl stop firewalld
#systemctl disable firewalld
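The text above also mentions disabling SELinux; a minimal sketch of those commands, assuming the stock /etc/selinux/config layout on CentOS 7, is:

```shell
# Switch SELinux to permissive mode for the running system...
setenforce 0
# ...and disable it permanently across reboots (test environments only)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```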
System Upgrade

Now upgrade your system and reboot so that the required changes take effect.

# yum update
# shutdown -r 0
 

Ceph User Settings

Now we will create a separate sudo user on each node, which will be used to install the ceph-deploy tool, and allow that user to access each node without a password, because ceph-deploy needs to install software and configuration files on the Ceph nodes without being prompted for passwords.

Run the following commands on the ceph-storage host to create a new user with its own home directory.

[root@ceph-storage ~]# useradd -d /home/ceph -m ceph
[root@ceph-storage ~]# passwd ceph
The new user on each node must have sudo rights, which you can grant with the commands shown below.

[root@ceph-storage ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
[root@ceph-storage ~]# sudo chmod 0440 /etc/sudoers.d/ceph
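To make sure the grant works as intended, you can run a command through sudo as the new user; with the NOPASSWD rule in place, no password prompt should appear:

```shell
# Run a command through sudo as the ceph user; with the NOPASSWD rule
# above, this prints "root" without asking for a password
su - ceph -c 'sudo whoami'
```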
 

Set up SSH keys

Now we will generate SSH keys on the Ceph admin node and copy them to each node of the Ceph cluster.

Run the following commands on ceph-node to copy its ssh key to ceph-storage.

[root@ceph-node ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5b:*:*:*:*:*:*:*:*:*:c9 root@ceph-node
The key's randomart image is:
+--[ RSA 2048]----+
[root@ceph-node ~]# ssh-copy-id ceph@ceph-storage

SSH key
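With the key copied over, passwordless login from ceph-node to ceph-storage can be verified in one step:

```shell
# Should print the remote hostname without prompting for a password
ssh ceph@ceph-storage hostname
```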

Configure the number of PID

To configure the PID count, first check the default kernel value with the following command. By default, it allows a maximum of 32768 threads.

Set this value to a larger number by editing the system configuration file as shown below.

Change PID value
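The kernel knob involved here is `kernel.pid_max`; a sketch of checking the current value and raising it persistently (4194303 is the usual upper bound on 64-bit kernels) looks like this:

```shell
# Show the current limit (32768 by default on CentOS 7)
cat /proc/sys/kernel/pid_max
# Persist a larger value and apply it immediately
echo "kernel.pid_max = 4194303" >> /etc/sysctl.conf
sysctl -p
```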

Configuration management node server

After configuring and verifying the network on all nodes, we will now install ceph-deploy as the ceph user. Check the hosts entries by opening the hosts file (LCTT translator's note: you can also use DNS resolution instead).

# vim /etc/hosts
45.79.136.163 ceph-storage
45.79.171.138 ceph-node
Run the following command to add the Ceph repository.

# rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

Add the Ceph storage repository

Alternatively, create a new repo file and update the Ceph repository parameters; do not forget to substitute your current release and version number.

[root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
After that, update your system and install the ceph-deploy package.


Installation ceph-deploy package

We run the following command to update the system and then install ceph-deploy along with the latest ceph libraries and other required packages.

#yum update -y && yum install ceph-deploy -y
 

Configuring the cluster

On the ceph admin node, use the following commands to create a directory for collecting all the output files and logs, and change into the new directory.

# mkdir ~/ceph-cluster
# cd ~/ceph-cluster
# ceph-deploy new storage

Setting ceph cluster

If the above command runs successfully, you will see it create the new configuration files.

Now set Ceph's default configuration: open the configuration file in any editor and add the following two lines, which set the global parameters that affect your public network.

# vim ceph.conf
osd pool default size = 1
public network = 45.79.0.0/16
 

Install Ceph

Now we are ready to install Ceph on each node associated with our cluster. We use the following command to install Ceph on both ceph-storage and ceph-node.

# ceph-deploy install ceph-node ceph-storage

Installation ceph

It will take some time to process all the required repositories and install all the required packages.

When the ceph installation has finished on both nodes, the next step is to create a monitor and gather the keys by running the following command on the same node.

# ceph-deploy mon create-initial

Ceph Initialization Monitor

Set up OSDs and OSD daemons

Now we will set up the disk storage. First, run the following command to list all of the disks available to you.

# ceph-deploy disk list ceph-storage
This will list the disks available on your storage node, which you can use to create OSDs. Run the following commands, substituting your own disk names.

# ceph-deploy disk zap storage:sda
# ceph-deploy disk zap storage:sdb
To finalize the OSD configuration, run the following commands with your journal disk and data disk.

# ceph-deploy osd prepare storage:sdb:/dev/sda
# ceph-deploy osd activate storage:/dev/sdb1:/dev/sda1
You will need to run the same commands on all the nodes, and they will erase everything on your disks. Afterwards, for the cluster to be up and running, we need to copy the configuration file and the keys from the ceph admin node to all the associated nodes with the following command.

# ceph-deploy admin ceph-node ceph-storage
 

Test Ceph

We are almost done with the Ceph cluster setup. Run the following commands on the ceph admin node to check the state of the running cluster.

# Ceph status
# Ceph health
HEALTH_OK
If you do not see any error messages in the ceph status output, it means you have successfully installed a ceph storage cluster on CentOS 7.
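As an additional smoke test, you can push an object through RADOS and read it back. This sketch assumes the default `rbd` pool that Ceph releases of this era created automatically:

```shell
# Write a small object into the cluster, read it back, and compare
echo "hello ceph" > /tmp/testfile
rados -p rbd put test-object /tmp/testfile
rados -p rbd get test-object /tmp/testfile.out
diff /tmp/testfile /tmp/testfile.out && echo "cluster I/O OK"
```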

Summary

In this article, we covered in detail how to set up a Ceph storage cluster using two virtual machines running CentOS 7, which you can then use as backup or local storage for your other virtual machines. We hope this article helps you. Remember to share your experience when you try it yourself.
     
         
       
         