  Kubernetes cluster deployment
     
  Add Date : 2016-09-26      
         
       
         
Given how popular Docker has become, Google launched Kubernetes for Docker cluster management, and many people are eager to try it. Kubernetes is backed by a number of large companies, and its deployment tooling already integrates with GCE, CoreOS, AWS and other IaaS platforms, where deployment is quite easy. Since much of the information online is based on older versions, this article briefly describes how to deploy the latest Kubernetes components and their dependencies. Following it you can get a rough Kubernetes cluster running; making it elegant takes more work. Deployment is divided into three steps:

1. Prepare the machines and open up the network

To deploy a Kubernetes cluster you need at least three machines: one as the master and two as minions. With a fourth machine you can run etcd as a separate service; with even more machines you can deploy an etcd cluster and more minions. This article uses four machines as an example; they can be physical machines or KVM virtual machines. Machine list:

master: 10.180.64.6
etcd: 10.180.64.7
minion1: 10.180.64.8
minion2: 10.180.64.9
For the network you can use flannel or Open vSwitch; there is plenty of information about both online.
2. Deploy the related components

The Kubernetes installation is divided into three parts: the etcd cluster, the master node, and the minions.

For convenience, this article builds the Kubernetes cluster on four cloud hosts, allocated as follows:

ip            Role
----------    ----------
10.180.64.6   Kubernetes master
10.180.64.7   Etcd node
10.180.64.8   Kubernetes minion1
10.180.64.9   Kubernetes minion2

2.1. etcd cluster

In this example a single cloud host serves as the etcd node; building an actual etcd cluster is left for a follow-up introduction to etcd.

root@cnsdev-paas-master:~# curl -L https://github.com/coreos/etcd/releases/download/v2.0.0-rc.1/etcd-v2.0.0-rc.1-linux-amd64.tar.gz -o etcd-v2.0.0-rc.1-linux-amd64.tar.gz

root@cnsdev-paas-master:~# tar xzvf etcd-v2.0.0-rc.1-linux-amd64.tar.gz

root@cnsdev-paas-master:~# cd etcd-v2.0.0-rc.1-linux-amd64

Copy all the executable files in the etcd directory to /bin.
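For example, from the extracted directory (the v2.0.0-rc.1 archive ships the etcd and etcdctl binaries):

root@cnsdev-paas-master:~# cp etcd etcdctl /bin/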

2.2. Master node

The master node only requires Kubernetes itself. First download Kubernetes by running the following commands:

root@cnsdev-paas-master:~# wget https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v0.8.0/kubernetes.tar.gz

root@cnsdev-paas-master:~# tar -zxvf kubernetes.tar.gz

root@cnsdev-paas-master:~# cd kubernetes/server

root@cnsdev-paas-master:~# tar -zxvf kubernetes-server-linux-amd64.tar.gz

root@cnsdev-paas-master:~# cd kubernetes/server/bin

On the master node, copy kube-apiserver, kube-controller-manager, kube-scheduler, kubecfg and kubectl to /bin.
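A sketch of that copy, run from the bin directory entered above:

root@cnsdev-paas-master:~# cp kube-apiserver kube-controller-manager kube-scheduler kubecfg kubectl /bin/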

2.3. Minion node

The minion nodes need Kubernetes, cAdvisor and Docker. Kubernetes has already been downloaded on the master; copy the extracted kubelet and kube-proxy to every minion.

On each minion node, copy kubelet and kube-proxy to /bin.

(PS: they do not have to be copied to /bin; you can instead add the directory containing the executables to $PATH, as sketched below.)
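For example, assuming the binaries were placed in a hypothetical /opt/kubernetes directory:

root@cnsdev-paas-master:~# export PATH=$PATH:/opt/kubernetes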

Install cAdvisor:

root@cnsdev-paas-master:~# wget https://github.com/google/cadvisor/releases/download/0.7.1/cadvisor

It is a standalone executable, so no decompression is needed; just copy it to /bin.

Install Docker:

Docker is installed on every minion. Kubernetes calls the Docker API to create pods as containers, and Kubernetes' own agent processes can themselves run inside Docker, which makes upgrading Kubernetes easier.

On Debian 7, Docker can be installed from the Docker repository for Ubuntu by running the following commands:

root@cnsdev-paas-master:~# echo deb http://get.docker.io/ubuntu docker main | sudo tee /etc/apt/sources.list.d/docker.list
root@cnsdev-paas-master:~# apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
root@cnsdev-paas-master:~# apt-get update
root@cnsdev-paas-master:~# apt-get install -y lxc-docker

Run docker version to check that it works normally.
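For example:

root@cnsdev-paas-master:~# docker version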

3. Run the Kubernetes cluster


3.1. Kubernetes configuration files

The configuration files in this section do not necessarily match those used on GCE or those installed via yum; they are a stop-gap for a fully manual installation. Once support for your platform is integrated into the kubernetes project's cluster directory, you can use Kubernetes' own one-click deployment, which drives salt to deploy the entire cluster without manual steps; the configuration files here are only for platforms not yet supported. All the required configuration files and startup scripts are packaged as kube-start.tar.gz.

3.1.1. etcd configuration file

The etcd configuration file is cfg-etcd:

ETCD_NAME="-name etcd-1"

The etcd node name. If the etcd cluster has only one node, this line can be commented out; the name then defaults to "default". The name is used again later.

ETCD_PEER_ADDRESS="-initial-advertise-peer-urls http://hostip:7001"

The address this etcd node advertises for peer communication within the cluster, generally on port 7001 or 2380. The etcd node's ip here is 10.180.64.7, so this line is changed to http://10.180.64.7:7001.

ETCD_CLIENT_ADDRESS="-advertise-client-urls http://hostip:4001"

The address the etcd node advertises for client access, generally on port 4001 or 2379; modified here to http://10.180.64.7:4001.

ETCD_DATA_DIR="-data-dir /home/data/etcd"

The directory where etcd stores its data; pick your own. Different data directories on the same etcd host correspond to different clusters.

ETCD_LISTEN_PEER_ADDRESS="-listen-peer-urls http://0.0.0.0:7001"

The address the etcd node listens on for peers; 0.0.0.0 listens on all interfaces. Configured here as http://0.0.0.0:7001.

ETCD_LISTEN_CLIENT_ADDRESS="-listen-client-urls http://0.0.0.0:4001"

The listening address for client requests; configured as http://0.0.0.0:4001.

ETCD_CLUSTER_MEMBERS="-initial-cluster etcd-1=http://ip_etcd-1:7001 etcd-2=http://ip_etcd-2:7001"

The list of etcd cluster members, which must use the peer port 7001 or 2380. There is only one node here and ETCD_NAME is not configured, so the default name "default" applies; configured here as default=http://10.180.64.7:7001.

ETCD_CLUSTER_STATE="-initial-cluster-state new"

The etcd cluster state: new indicates a new cluster, existing indicates one that already exists.

ETCD_ARGS=""

Any additional parameters you need can be added here; all parameters can be viewed with etcd -h.
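The kube-etcd start script itself is not reproduced in this article; here is a minimal sketch of what such a wrapper might look like, assuming it simply sources cfg-etcd and hands the assembled flags to the etcd binary in /bin:

#!/bin/sh
# Hypothetical kube-etcd wrapper: load the configuration and start etcd.
. ./cfg-etcd
/bin/etcd ${ETCD_NAME} ${ETCD_PEER_ADDRESS} ${ETCD_CLIENT_ADDRESS} \
    ${ETCD_DATA_DIR} ${ETCD_LISTEN_PEER_ADDRESS} ${ETCD_LISTEN_CLIENT_ADDRESS} \
    ${ETCD_CLUSTER_MEMBERS} ${ETCD_CLUSTER_STATE} ${ETCD_ARGS}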

3.1.2. Kubernetes cluster configuration file

      cfg-common:

KUBE_ETCD_SERVERS="--etcd_servers=http://10.180.64.7:4001"

The etcd service address. The etcd service started above is used, so this is configured as http://10.180.64.7:4001.

KUBE_LOGTOSTDERR="--logtostderr=true"

Whether to log to stderr instead of to files.

KUBE_LOG_LEVEL="--v=0"

Log level.

KUBE_ALLOW_PRIV="--allow_privileged=false"

Whether containers are allowed to run privileged.

3.1.3. Apiserver configuration file

      cfg-apiserver:

KUBE_API_ADDRESS="--address=0.0.0.0"

The listening interface: 127.0.0.1 listens only on localhost, while 0.0.0.0 listens on all interfaces. Configured here as 0.0.0.0.

KUBE_API_PORT="--port=8080"

The apiserver listening port; the default 8080 is left unmodified.

KUBE_MASTER="--master=10.180.64.6:8080"

The apiserver service address; controller-manager, scheduler and kubelet all use this setting. Configured here as 10.180.64.6:8080.

KUBELET_PORT="--kubelet_port=10250"

The port the minion kubelets listen on, 10250 by default; left unmodified.

KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"

The ip range Kubernetes can allocate from; every pod and service Kubernetes starts is assigned an address from this range.

KUBE_API_ARGS=""

Additional configuration items to add if required; a simple cluster does not need any.
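The apiserver start script from kube-start.tar.gz is likewise not shown; a plausible minimal sketch, assuming it sources cfg-common and cfg-apiserver and starts kube-apiserver from /bin:

#!/bin/sh
# Hypothetical apiserver wrapper: combine common and apiserver settings.
. ./cfg-common
. ./cfg-apiserver
/bin/kube-apiserver ${KUBE_LOGTOSTDERR} ${KUBE_LOG_LEVEL} ${KUBE_ALLOW_PRIV} \
    ${KUBE_ETCD_SERVERS} ${KUBE_API_ADDRESS} ${KUBE_API_PORT} \
    ${KUBELET_PORT} ${KUBE_SERVICE_ADDRESSES} ${KUBE_API_ARGS}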

3.1.4. Controller configuration file

      cfg-controller-manager:

KUBELET_ADDRESSES="--machines=10.180.64.8,10.180.64.9"

The list of minions in the Kubernetes cluster; configured here as 10.180.64.8,10.180.64.9.

KUBE_CONTROLLER_MANAGER_ARGS=""

Additional parameters to add if needed.
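The controller start script might look like this (again an assumed sketch; KUBE_MASTER comes from cfg-apiserver, so that file is sourced too):

#!/bin/sh
# Hypothetical controller wrapper for kube-controller-manager.
. ./cfg-common
. ./cfg-apiserver
. ./cfg-controller-manager
/bin/kube-controller-manager ${KUBE_LOGTOSTDERR} ${KUBE_LOG_LEVEL} \
    ${KUBE_MASTER} ${KUBELET_ADDRESSES} ${KUBE_CONTROLLER_MANAGER_ARGS}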

3.1.5. Scheduler configuration file

      cfg-schedule:

Additional parameters can be added if needed; none are added here for now.

3.1.6. Kubelet configuration file

      cfg-kubelet:

KUBELET_ADDRESS="--address=10.180.64.8"

The minion listening address; configure each minion with its actual ip: 10.180.64.8 on minion1 and 10.180.64.9 on minion2.

KUBELET_PORT="--port=10250"

The listening port. Do not modify it; if you do, the corresponding configuration items on the master must be modified as well.

KUBELET_HOSTNAME="--hostname_override=10.180.64.8"

The minion's name as seen by Kubernetes; kubecfg list minions shows this name rather than the hostname. Setting it to the same ip address as above makes minions easy to identify.

KUBELET_ARGS=""

Additional parameters
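On a minion, the kube script (the kubelet wrapper) could be sketched like this, under the same assumptions as the wrappers above:

#!/bin/sh
# Hypothetical kubelet wrapper run on each minion.
. ./cfg-common
. ./cfg-kubelet
/bin/kubelet ${KUBE_LOGTOSTDERR} ${KUBE_LOG_LEVEL} ${KUBE_ETCD_SERVERS} \
    ${KUBELET_ADDRESS} ${KUBELET_PORT} ${KUBELET_HOSTNAME} ${KUBELET_ARGS}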

3.1.7. Proxy configuration file

      cfg-proxy:

Additional parameters can be added if needed; none are needed here.

3.2. Starting Kubernetes

Extract kube-start.tar.gz. Copy cfg-etcd and kube-etcd to the etcd node and make kube-etcd executable. Copy cfg-common, cfg-apiserver, cfg-controller-manager, cfg-schedule, apiserver, controller and schedule to the master and make apiserver, controller and schedule executable. Copy cfg-common, cfg-kubelet, cfg-proxy, cadv, kube and proxy to every minion host, make sure each minion's cfg-kubelet is modified correctly, and make cadv, kube and proxy executable.

First start the etcd service on the etcd node:

root@cnsdev-paas-master:~# ./kube-etcd &

To test that etcd works normally, execute on the master:

root@cnsdev-paas-master:~# curl -L http://10.180.64.7:4001/version

etcd 2.0.0-rc.1

Then execute in order on the master:

root@cnsdev-paas-master:~# ./apiserver &

root@cnsdev-paas-master:~# ./controller &

root@cnsdev-paas-master:~# ./schedule &

Finally execute in order on every minion:

root@cnsdev-paas-master:~# ./cadv &

root@cnsdev-paas-master:~# ./kube &

root@cnsdev-paas-master:~# ./proxy &

Once all the components are running, check the cluster state from the master.

View the cluster status:

root@cnsdev-paas-master:~# kubecfg list minions

Minion identifier   Labels
----------          ----------
10.180.64.9
10.180.64.8

You can see the cluster has two nodes, 10.180.64.8 and 10.180.64.9, which are exactly the two nodes deployed.

View the current pods in the cluster:

root@cnsdev-paas-master:~# kubecfg list pods
Name                                   Image(s)           Host           Labels       Status
----------                             ----------         ----------     ----------   ----------
e473c35e-961d-11e4-bc28-fa163e8b5289   dockerfile/redis   10.180.64.9/   name=redis   Running

Ignore the redis pod shown here; if you have just created the cluster there are no pods yet. Of course, if you created the cluster with one click on AWS or GCE, you may see pods with Kubernetes' default names: these belong to the monitoring that is enabled by default.

Now that the cluster is up, let's create a tomcat replicationController to play with. There are several interfaces for this; json is chosen here, so we need to write a tomcat-controller.json file that tells Kubernetes how to create this controller. (The file name can be anything readable.) tomcat-controller.json looks roughly like this:

{
  "id": "tomcatController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 2,
    "replicaSelector": {"name": "tomcatCluster"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "tomcat",
          "containers": [{
            "name": "tomcat",
            "image": "tutum/tomcat",
            "ports": [{
              "containerPort": 8080, "hostPort": 80
            }]
          }]
        }
      },
      "labels": {"name": "tomcatCluster"}
    }
  },
  "labels": {
    "name": "tomcatCluster"
  }
}
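Before handing the file to Kubernetes, it can help to confirm it is valid json, for example (assuming python is available on the host):

root@cnsdev-paas-master:/home/pod# python -m json.tool tomcat-controller.json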

After reading the Kubernetes documentation you will understand what each value means. Once the file is written, have Kubernetes execute it:

root@cnsdev-paas-master:/home/pod# kubecfg -c tomcat-controller.json create replicationControllers

If it reports success, you can view the cluster's controllers:

root@cnsdev-paas-master:/home/pod# kubecfg list replicationControllers

Name               Image(s)           Selector             Replicas
----------         ----------         ----------           ----------
redisController    dockerfile/redis   name=redis           1
tomcatController   tutum/tomcat       name=tomcatCluster   2

Again disregard redis. The tomcat replicationController is up, and Replicas = 2 means two docker containers should run in the cluster. The image to run is tutum/tomcat; if your minions do not yet have this image, Kubernetes will download it from docker hub for you, and if it is already local, Kubernetes directly runs the two tomcat containers (pods) on your minions. Take a look to see whether that is true:

root@cnsdev-paas-master:/home/pod# kubecfg list pods

Name                                   Image(s)           Host           Labels               Status
----------                             ----------         ----------     ----------           ----------
643582db-97d1-11e4-aefa-fa163e8b5289   tutum/tomcat       10.180.64.9/   name=tomcatCluster   Running
e473c35e-961d-11e4-bc28-fa163e8b5289   dockerfile/redis   10.180.64.9/   name=redis           Running
64348fde-97d1-11e4-aefa-fa163e8b5289   tutum/tomcat       10.180.64.8/   name=tomcatCluster   Running

For more usage, see the section on the interfaces.
     
         
       
         