How to Configure Ceph Storage on CentOS 7.0

Add Date: 2018-11-21

Ceph is an open-source software platform for distributed data storage on a single computer cluster. When you plan to build a cloud, the first thing you need to decide is how to implement your storage. Ceph is one of Red Hat's original open-source technologies; its object storage system, called RADOS, is fronted by a set of gateway APIs that present the data in block, file, and object modes. Because it is open source, this portable storage platform can be installed and used on both public and private clouds. The Ceph cluster topology is designed around replication and distribution of information, a design that provides inherent data integrity. It is designed to be fault tolerant and, when properly configured, runs on commodity hardware as well as on more advanced systems.

Ceph can be installed on any Linux distribution, but to run properly it requires a recent kernel and up-to-date libraries. In this guide, we will use a minimal installation of CentOS 7.0.

System Resources

** CEPH-STORAGE **
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
Network: 45.79.136.163
FQDN: ceph-storage.linoxide.com

** CEPH-NODE **
OS: CentOS Linux 7 (Core)
RAM: 1 GB
CPU: 1 CPU
DISK: 20 GB
Network: 45.79.171.138
FQDN: ceph-node.linoxide.com
 

Pre-installation configuration

Before installing Ceph storage, we have to complete a number of steps on each node. The first thing is to make sure that the network is configured on every node and that the nodes can reach each other.

Configuring Hosts

To set up the hosts entries on each node, open the default hosts configuration file and add the lines below (alternatively, configure the corresponding DNS resolution).

# vi /etc/hosts
45.79.136.163 ceph-storage ceph-storage.linoxide.com
45.79.171.138 ceph-node ceph-node.linoxide.com
Install VMware Tools

Since our working environment is a VMware virtual environment, it is recommended that you install the open VM tools package. You can install it with the following command.

# yum install -y open-vm-tools
Configure the firewall

If you are working in a restrictive environment with the firewall enabled, make sure the following ports are open on your Ceph storage admin node and on the client nodes.

You have to open ports 80, 2003, and 4505-4506 on your admin Calamari node, and allow access through port 80 so that clients on your network can reach the Calamari web user interface on the Ceph or Calamari admin node.

You can start and enable the firewall on CentOS 7 with the following commands.

# systemctl start firewalld
# systemctl enable firewalld
Run the following commands on the admin Calamari node to open the ports mentioned above.

# firewall-cmd --zone=public --add-port=80/tcp --permanent
# firewall-cmd --zone=public --add-port=2003/tcp --permanent
# firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
# firewall-cmd --reload
On the Ceph monitor node, allow the following port through the firewall.

# firewall-cmd --zone=public --add-port=6789/tcp --permanent
Then allow the following range of default ports so that the OSDs can interact with the monitor nodes and clients and send data to the other OSDs.

# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
If you are working in a non-production environment, we recommend disabling the firewall and SELinux; in our test environment we will disable both.

# systemctl stop firewalld
# systemctl disable firewalld
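To disable SELinux as well, one common approach (the exact commands may vary with your setup) is to switch it to permissive mode for the current session and then set SELINUX=disabled in /etc/selinux/config so the change survives a reboot.

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config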
System Upgrade

Now upgrade your system and reboot so that the required changes take effect.

# yum update
# shutdown -r 0
 

Ceph User Settings

Now we will create a separate sudo user on each node for installing the ceph-deploy tool, and allow that user password-less access to each node, because ceph-deploy needs to install software and configuration files on the Ceph nodes without being prompted for a password.

Run the following commands on the ceph-storage host to create a new user with its own home directory.

[root@ceph-storage ~]# useradd -d /home/ceph -m ceph
[root@ceph-storage ~]# passwd ceph
The new user on each node must have sudo rights, which you can grant with the following commands.

[root@ceph-storage ~]# echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
[root@ceph-storage ~]# sudo chmod 0440 /etc/sudoers.d/ceph
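On some CentOS 7 installations sudo is built with "Defaults requiretty", which can make ceph-deploy fail when it runs commands over SSH. If that applies to your nodes, one way to comment it out (use visudo if you prefer a safer edit) is:

[root@ceph-storage ~]# sed -i 's/^Defaults.*requiretty/#Defaults requiretty/' /etc/sudoers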
 

Set up SSH keys

Now we will generate SSH keys on the Ceph admin node and copy the keys to every node of the Ceph cluster.

Run the following commands on ceph-node to copy its SSH key to ceph-storage.

[root@ceph-node ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5b:*:*:*:*:*:*:*:*:*:c9 root@ceph-node
The key's randomart image is:
+--[ RSA 2048]----+
[root@ceph-node ~]# ssh-copy-id ceph@ceph-storage

SSH key
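Optionally, you can also add an SSH client configuration entry on ceph-node so that ssh and ceph-deploy log in to the storage node as the ceph user by default; the host names below are simply the ones used in this guide.

[root@ceph-node ~]# vi ~/.ssh/config
Host ceph-storage
    Hostname ceph-storage.linoxide.com
    User ceph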

Configure the PID Count

To configure the maximum number of PIDs, we first check the default kernel value. By default it is 32768, a relatively small maximum number of threads.

Set the value to a larger number by editing the system configuration file, as shown in the sketch below.

Change PID value
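The commands themselves are roughly as follows; the kernel.pid_max value of 4194303 is just an example of a larger limit, so pick whatever suits your environment.

# cat /proc/sys/kernel/pid_max
32768
# echo 'kernel.pid_max = 4194303' >> /etc/sysctl.conf
# sysctl -p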

Configure the Admin Node

After configuring and verifying the network on all nodes, we now install ceph-deploy as the ceph user. Check the hosts entries by opening the hosts file (you can also use DNS resolution instead).

# vim /etc/hosts
45.79.136.163 ceph-storage
45.79.171.138 ceph-node
Run the following command to add its repository.

# rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm

Add the Ceph storage repository

Alternatively, create a new repository file and update the Ceph repository parameters; do not forget to substitute your current release and version number.

[root@ceph-storage ~]# vi /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
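For example, with the rpm-giant release on CentOS 7 used above, the baseurl placeholders would resolve to something like this:

baseurl=http://ceph.com/rpm-giant/el7/noarch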
Then update your system and install the ceph-deploy package.


Install the ceph-deploy package

We run the following command to update the system and install ceph-deploy along with the latest Ceph libraries and other packages.

# yum update -y && yum install ceph-deploy -y
 

Configuring the cluster

Use the following commands on the Ceph admin node to create a new directory and enter it; it will collect all of the output files and logs.

# mkdir ~/ceph-cluster
# cd ~/ceph-cluster
# ceph-deploy new storage

Set up the Ceph cluster

If the above command runs successfully, you will see that it has created new configuration files.

Ceph's default configuration file is now in place; open it with any editor and add the following two lines, which affect your public network, under its global section.

# vim ceph.conf
osd pool default size = 1
public network = 45.79.0.0/16
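For reference, the two lines sit under the [global] section that ceph-deploy generated; the other values shown here are only placeholders for whatever ceph-deploy wrote in your own file.

[global]
fsid = <generated by ceph-deploy>
mon_initial_members = <your monitor host>
mon_host = <your monitor IP>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
public network = 45.79.0.0/16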
 

Install Ceph

Now we are ready to install Ceph on each node associated with the cluster. We use the following command to install Ceph on both ceph-storage and ceph-node.

# ceph-deploy install ceph-node ceph-storage

Installing Ceph

Processing all of the required repositories and installing the required packages will take some time.

When the Ceph installation has finished on both nodes, the next step is to create a monitor and gather the keys by running the following command on the same node.

# ceph-deploy mon create-initial

Create the initial Ceph monitor

Set Up the OSDs and OSD Daemons

Now we will set up the disk storage. First, run the following command to list all of the available disks.

# ceph-deploy disk list ceph-storage
The output lists the disks on your storage node, which you can use to create OSDs. Run the following commands, substituting your own disk names.

# ceph-deploy disk zap storage:sda
# ceph-deploy disk zap storage:sdb
To finalize the OSD setup, run the following commands for the journal disk and the data disk.

# ceph-deploy osd prepare storage:sdb:/dev/sda
# ceph-deploy osd activate storage:/dev/sdb1:/dev/sda1
You will need to run the same commands on all of the nodes, and they will erase everything on your disks. Afterwards, for the cluster to come up and work properly, we need to copy the keys and configuration files from the Ceph admin node to all of the associated nodes with the following command.

# ceph-deploy admin ceph-node ceph-storage
 

Test Ceph

We are almost done with the Ceph cluster setup; let's run the following commands on the Ceph admin node to check the state of the running cluster.

# ceph status
# ceph health
HEALTH_OK
If you do not see any error messages in the Ceph status output, it means that you have successfully installed a Ceph storage cluster on CentOS 7.
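If you want to dig a little deeper, the standard ceph CLI also offers, for example, the following commands to inspect the OSD layout and the cluster's free space.

# ceph osd tree
# ceph df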

Summary

In this article, we went through in detail how to set up a Ceph storage cluster using two virtual machines running CentOS 7, which can then be used as backup or as local storage for your other virtual machines. We hope this article helps you; remember to share your experience when you try the installation yourself.
     
         
       
         