Building a Docker Cluster Management Platform Based on Kubernetes
     
  Add Date : 2016-10-12      
         
       
         
Kubernetes is Google's open-source container cluster management system. Built on top of Docker containers, it provides a scheduling service with resource scheduling, load balancing and disaster recovery, service registration, dynamic scaling, and other features; at the time of writing the latest version is 0.6.2. This article describes how to build a Kubernetes cluster on CentOS 7.0.

Before the formal walkthrough, we first need to understand the core concepts of Kubernetes and the function each one carries.

1. Pods

In Kubernetes, the smallest unit of scheduling is not an individual container but an abstraction called a Pod. A Pod is the smallest deployable unit that can be created, destroyed, scheduled, and managed; it holds a single container or a group of closely related containers.

2. Replication Controllers

The Replication Controller is one of the most useful features in Kubernetes: it maintains multiple replicas of a Pod. An application often needs several Pods to back it, and the Replication Controller guarantees the desired replica count: even if the host carrying a replica fails, it brings up the same number of Pods on other hosts. A Replication Controller can create multiple Pod replicas from a template (repcon template) or replicate an existing Pod; the association is made through a label selector.

3. Services

Services are the outermost unit of Kubernetes: through a virtual IP and a service port, you can reach the Pod resources you defined. The current version implements this with iptables NAT forwarding, where the forwarding target is a random port generated by kube-proxy. At present an access scheduler is offered only for Google's cloud, such as GCE. How to integrate with a self-built platform? Please watch for the follow-up article, "Kubernetes and HECD architecture integration".

4. Labels

Labels are key/value pairs used to distinguish Pods, Services, and Replication Controllers. Strictly speaking, labels only express the relationships among Pods, Services, and Replication Controllers; when operating on one of these units itself, it is identified by its name.
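As used throughout this article, one identical key/value pair is what ties a replication controller, its pod template, and a service together. A schematic sketch of that pairing (this is not a real API object, just three fragments side by side; the value "demo_pod" is a made-up example in the v1beta1 style used later in this article):

```json
{
  "replicationController": {"replicaSelector": {"name": "demo_pod"}},
  "podTemplate":           {"labels":          {"name": "demo_pod"}},
  "service":               {"selector":        {"name": "demo_pod"}}
}
```

If any one of the three pairs differs, the association silently fails to match.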

5. Proxy

The Proxy not only solves port conflicts between service units on the same host, but also provides port forwarding so a Service can be exposed to external consumers. For its backend, the Proxy uses a random round-robin load-balancing algorithm.

A few personal observations. Kubernetes currently iterates at roughly one minor version a week and one major version a month. This rapid pace brings behavioral differences between versions, and the official documentation lags behind, which poses some challenges for beginners. On the access layer, the official project also focuses on integration with GCE (Google Compute Engine); no practical access solution has yet been offered for personal private clouds. As of v0.5, service forwarding is only a reference proxy mechanism, implemented with iptables, whose performance under high concurrency is worrying. Still, I remain optimistic about the future development of Kubernetes: I have not yet seen another system with such a strong platform ecosystem, and I believe that by v1.0 it will be capable of supporting production environments.

First, the deployment environment

1. Platform versions


OS: CentOS 7.0
Kubernetes: v0.6.2
etcd: v0.4.6
Docker: v1.3.2

2. Platform environment description

3. Installation Environment

1) System initialization (all hosts)

System installation: select [Minimal Install].

# yum -y install wget ntpdate bind-utils
# wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/epel-release-7-2.noarch.rpm
# yum update
 

CentOS 7.0 uses firewalld as its default firewall; here we switch to iptables (a matter of familiarity, not essential).

1.1 Turn off firewalld:

# systemctl stop firewalld.service       # stop the firewall
# systemctl disable firewalld.service    # keep it from starting at boot
 

1.2 Install the iptables firewall

# yum install iptables-services          # install
# systemctl start iptables.service       # restart so the configuration takes effect
# systemctl enable iptables.service      # start the firewall at boot
2) Install etcd (host 192.168.1.10)

# mkdir -p /home/install && cd /home/install
# wget https://github.com/coreos/etcd/releases/download/v0.4.6/etcd-v0.4.6-linux-amd64.tar.gz
# tar -zxvf etcd-v0.4.6-linux-amd64.tar.gz
# cd etcd-v0.4.6-linux-amd64
# cp etcd* /bin/
# /bin/etcd -version
etcd version 0.4.6
Start the etcd service. If third-party management tools need access, add the extra parameter "-cors='*'" to the startup options.

# mkdir /data/etcd
# /bin/etcd -name etcdserver -peer-addr 192.168.1.10:7001 -addr 192.168.1.10:4001 -data-dir /data/etcd -peer-bind-addr 0.0.0.0:7001 -bind-addr 0.0.0.0:4001 &
 

Configure the firewall for the etcd service: 4001 is the service (client) port and 7001 the cluster data-exchange port.

# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 4001 -j ACCEPT
# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 7001 -j ACCEPT
 

3) Install Kubernetes (all master and minion hosts)

Installing from the yum repository will by default also pull in etcd, docker, cadvisor, and related packages.

# curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo
# yum -y install kubernetes
 

Upgrade to v0.6.2 by overwriting the bin files, as follows:

# mkdir -p /home/install && cd /home/install
# wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.6.2/kubernetes.tar.gz
# tar -zxvf kubernetes.tar.gz
# tar -zxvf kubernetes/server/kubernetes-server-linux-amd64.tar.gz
# cp kubernetes/server/bin/kube* /usr/bin
 

Check the installation result; output like the following indicates a proper install.

[root@SN2014-12-200 bin]# /usr/bin/kubectl version
Client Version: version.Info{Major:"0", Minor:"6+", GitVersion:"v0.6.2", GitCommit:"729fde276613eedcd99ecf5b93f095b8deb64eb4", GitTreeState:"clean"}
Server Version: &version.Info{Major:"0", Minor:"6+", GitVersion:"v0.6.2", GitCommit:"729fde276613eedcd99ecf5b93f095b8deb64eb4", GitTreeState:"clean"}
 

4) Kubernetes configuration (master host only)

The master runs three components (apiserver, scheduler, and controller-manager); the configuration items below concern only these three.

4.1 [/etc/kubernetes/config]

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.1.10:4001"
# Logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# Journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
 

4.2 [/etc/kubernetes/apiserver]

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=192.168.1.200:8080"
# Port minions listen on
KUBELET_PORT="--kubelet_port=10250"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""

4.3 [/etc/kubernetes/controller-manager]

# Comma separated list of minions
KUBELET_ADDRESSES="--machines=192.168.1.201,192.168.1.202"
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""
 

4.4 [/etc/kubernetes/scheduler]

# Add your own!
KUBE_SCHEDULER_ARGS=""
 
Start the master-side services:

# systemctl daemon-reload
# systemctl start kube-apiserver.service kube-controller-manager.service kube-scheduler.service
# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service
 

5) Kubernetes configuration (minion hosts only)

A minion runs two components, kubelet and proxy; the configuration items likewise concern only these two.

Update the Docker startup options:

# vi /etc/sysconfig/docker
Add -H tcp://0.0.0.0:2375 so that the remote API is available for later maintenance; the final configuration is as follows:

OPTIONS=--selinux-enabled -H tcp://0.0.0.0:2375 -H fd://
Modify the firewall configuration on the minion. When the master cannot find a minion host, it is usually because this port is unreachable.

# iptables -I INPUT -s 192.168.1.200 -p tcp --dport 10250 -j ACCEPT
Modify the Kubernetes configuration on the minion side, taking host 192.168.1.201 as the example; the other minions are configured analogously.

5.1 [/etc/kubernetes/config]

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.1.10:4001"
# Logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# Journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
 

5.2 [/etc/kubernetes/kubelet]

###
# Kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=192.168.1.201"
# Add your own!
KUBELET_ARGS=""


5.3 [/etc/kubernetes/proxy]

KUBE_PROXY_ARGS=""
 

Start the Kubernetes services:

# systemctl daemon-reload
# systemctl enable docker.service kubelet.service kube-proxy.service
# systemctl start docker.service kubelet.service kube-proxy.service
 

4. Verify the installation (on the master host, or from any API client host that can reach port 8080 on the master)

1) Common kubectl commands

# kubectl get minions                  # list minion hosts
# kubectl get pods                     # list pods
# kubectl get services                 # list services (or: kubectl get services -o json)
# kubectl get replicationControllers   # list replicationControllers
# for i in `kubectl get pod | tail -n +2 | awk '{print $1}'`; do kubectl delete pod $i; done   # delete all pods
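The delete-all one-liner above relies on plain text processing: `tail -n +2` drops the header row of `kubectl get pod` output and `awk` keeps the NAME column. A self-contained illustration of that pipeline against sample output (the pod names here are made up):

```shell
# Sample `kubectl get pod` output (hypothetical pod names)
sample='NAME           IMAGE(S)          HOST            LABELS              STATUS
fedoraapache   fedora/apache     192.168.1.202/  name=fedoraapache   Running
webpod-1       yorko/webserver   192.168.1.201/  name=webserver_pod  Running'

# tail -n +2 drops the header line; awk prints the first (NAME) column
printf '%s\n' "$sample" | tail -n +2 | awk '{print $1}'
```

Each printed name is then fed to `kubectl delete pod` by the loop.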
 

Alternatively, use the server's REST API directly (recommended, as it is more up to date):

# curl -s -L http://192.168.1.200:8080/api/v1beta1/version | python -m json.tool                  # kubernetes version
# curl -s -L http://192.168.1.200:8080/api/v1beta1/pods | python -m json.tool                     # list pods
# curl -s -L http://192.168.1.200:8080/api/v1beta1/replicationControllers | python -m json.tool   # list replicationControllers
# curl -s -L http://192.168.1.200:8080/api/v1beta1/minions | python -m json.tool                  # list minion hosts
# curl -s -L http://192.168.1.200:8080/api/v1beta1/services | python -m json.tool                 # list services
 

NOTE: in newer Kubernetes releases, all operational commands have been consolidated into kubectl, superseding kubecfg, kubectl.sh, kubecfg.sh, and so on.

2) Create a test pod unit

# mkdir -p /home/kubermange/pods && cd /home/kubermange/pods

# vi apache-pod.json

{
  "id": "fedoraapache",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "fedoraapache",
      "containers": [{
        "name": "fedoraapache",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 8080
        }]
      }]
    }
  },
  "labels": {
    "name": "fedoraapache"
  }
}
 

# kubectl create -f apache-pod.json

# kubectl get pod

NAME           IMAGE(S)        HOST            LABELS              STATUS
fedoraapache   fedora/apache   192.168.1.202/  name=fedoraapache   Running
 

Open http://192.168.1.202:8080/ in a browser; remember that the corresponding service port must already be allowed in iptables.

Looking at the data storage structure, each pod is stored in JSON format.

Second, hands-on operations

Task: use Kubernetes to create an LNMP web-service cluster architecture and observe its load balancing. The image involved, "yorko/webserver", has been pushed to registry.hub.docker.com and can be downloaded with "docker pull yorko/webserver".

# mkdir -p /home/kubermange/replication && mkdir -p /home/kubermange/service
# cd /home/kubermange/replication
 

1. Create a replication controller. In this example the pod is created directly from the template inside the replication controller; you can also create a pod independently first and then have the replication controller replicate it.

[replication/lnmp-replication.json]

{
  "id": "webserverController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "labels": {"name": "webserver"},
  "desiredState": {
    "replicas": 2,
    "replicaSelector": {"name": "webserver_pod"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "webserver",
          "volumes": [
            {"name": "httpconf", "source": {"hostDir": {"path": "/etc/httpd/conf"}}},
            {"name": "httpconfd", "source": {"hostDir": {"path": "/etc/httpd/conf.d"}}},
            {"name": "httproot", "source": {"hostDir": {"path": "/data"}}}
          ],
          "containers": [{
            "name": "webserver",
            "image": "yorko/webserver",
            "command": ["/bin/sh", "-c", "/usr/bin/supervisord -c /etc/supervisord.conf"],
            "volumeMounts": [
              {"name": "httpconf", "mountPath": "/etc/httpd/conf"},
              {"name": "httpconfd", "mountPath": "/etc/httpd/conf.d"},
              {"name": "httproot", "mountPath": "/data"}
            ],
            "cpu": 100,
            "memory": 50000000,
            "ports": [{
              "containerPort": 80
            }, {
              "containerPort": 22
            }]
          }]
        }
      },
      "labels": {"name": "webserver_pod"}
    }
  }
}
 

Execute the create command:

# kubectl create -f lnmp-replication.json

Observe the list of generated pod replicas:

[root@SN2014-12-200 replication]# kubectl get pod

NAME                                   IMAGE(S)          HOST            LABELS              STATUS
84150ab7-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.202/  name=webserver_pod  Running
84154ed5-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.201/  name=webserver_pod  Running
840beb1b-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.202/  name=webserver_pod  Running
84152d93-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.202/  name=webserver_pod  Running
840db120-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.201/  name=webserver_pod  Running
8413b4f3-89f8-11e4-970d-000c292f1620   yorko/webserver   192.168.1.201/  name=webserver_pod  Running
 

2. Create a service, associated with the pods through the selector "name": "webserver_pod".

[service/lnmp-service.json]

{
  "id": "webserver",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "webserver_pod"
  },
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8080
}
 

Execute the create command:

# kubectl create -f lnmp-service.json

Log in to a minion host (192.168.1.201) and query the generated iptables forwarding rules (see the last line):

# iptables -nvL -t nat

Chain KUBE-PROXY (2 references)
 pkts bytes target    prot opt in  out  source     destination
    2   120 REDIRECT  tcp  --  *   *   0.0.0.0/0  10.254.102.162  /* kubernetes */     tcp dpt:443 redir ports 47700
    1    60 REDIRECT  tcp  --  *   *   0.0.0.0/0  10.254.28.74    /* kubernetes-ro */  tcp dpt:80 redir ports 60099
    0     0 REDIRECT  tcp  --  *   *   0.0.0.0/0  10.254.216.51   /* webserver */      tcp dpt:8080 redir ports 40689
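Rather than eyeballing the nat table, the random port kube-proxy assigned to a service can be pulled out of the rule's comment tag. A minimal sketch that parses captured output like the above (the sample lines are hard-coded here so the snippet runs anywhere; on a live minion you would feed it the output of `iptables -nvL -t nat` instead):

```shell
# Captured KUBE-PROXY rules (trimmed sample, as printed by `iptables -nvL -t nat`)
rules='2 120 REDIRECT tcp -- * * 0.0.0.0/0 10.254.102.162 /* kubernetes */ tcp dpt:443 redir ports 47700
0 0 REDIRECT tcp -- * * 0.0.0.0/0 10.254.216.51 /* webserver */ tcp dpt:8080 redir ports 40689'

# The service name sits inside the /* ... */ comment; the redirect port is the last field
port=$(printf '%s\n' "$rules" | awk '/\/\* webserver \*\//{print $NF}')
echo "$port"
```

For the sample above, `port` comes out as 40689, matching the last rule.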
 

Access test: open http://192.168.1.201:40689/info.php and keep refreshing the browser to watch the proxied backend change; the proxy defaults to a random round-robin algorithm.

Third, the test procedure

1. Pod auto-replication and destruction test: observe Kubernetes automatically maintaining the number of replicas (6 replicas).

Delete one replica (fedoraapache) managed by the replication controller:

[root@SN2014-12-200 pods]# kubectl delete pods fedoraapache
I1219 23:59:39.305730    9516 restclient.go:133] Waiting for completion of operation 142530
fedoraapache

[root@SN2014-12-200 pods]# kubectl get pods

NAME                                   IMAGE(S)        HOST            LABELS             STATUS
5d70892e-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/  name=fedoraapache  Running
5d715e56-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/  name=fedoraapache  Running
5d717f8d-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/  name=fedoraapache  Running
5d71c584-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/  name=fedoraapache  Running
5d71a494-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/  name=fedoraapache  Running
 

# A replacement replica is generated automatically, restoring the total to 6:

[root@SN2014-12-200 pods]# kubectl get pods

NAME                                   IMAGE(S)        HOST            LABELS             STATUS
5d717f8d-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/  name=fedoraapache  Running
5d71c584-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/  name=fedoraapache  Running
5d71a494-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/  name=fedoraapache  Running
2a8fb993-8798-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/  name=fedoraapache  Running
5d70892e-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.201/  name=fedoraapache  Running
5d715e56-8794-11e4-970d-000c292f1620   fedora/apache   192.168.1.202/  name=fedoraapache  Running
 

2. Test hostPort behavior across the different role modules

1) With hostPort empty in the pod but a port specified in the replication controller, an exception occurs; with a port specified on both sides (whether the same or not), an exception also occurs; with hostPort specified in the pod and left empty in the replication controller, it works; with hostPort empty on both sides, it also works. The preliminary conclusion is not to specify hostPort in the replication controller scenario, otherwise an exception occurs; testing is continuing.

2) Conclusion: in replicationcontrollers.json, "replicaSelector": {"name": "webserver_pod"} must be consistent with the pod template's "labels": {"name": "webserver_pod"} and with the service's "selector": {"name": "webserver_pod"}.
     
         
       
         