  How to build a container cluster
     
  Add Date : 2016-11-19      
         
         
         
This is the second post in Google's blog series on container technology. The first article gave an overview of containers, Docker, and the basic concepts of Kubernetes; this one looks at Kubernetes in more depth. Starting from some of Kubernetes' core concepts, the author introduces the core elements Google relies on when building a container cluster management system.

Last week, Miles Ward from the Google Cloud Platform Global Solutions Group opened this blog series on container technology for us. In that post he gave a general introduction to the basic concepts of containers, Docker, and Kubernetes. If you have not read it yet, we suggest starting there to pick up the relevant background; it will also help you better understand the content of this article.

This week we have invited Joe Beda, a senior engineer at Google and a core member of the Kubernetes project. He takes a deeper look at some of the core technical concepts Google arrived at while running on container technology. These concepts are also the foundation on which Kubernetes was created, and understanding them will help you follow the later articles in this series.

How to build a container cluster?

Recently, container systems and related technology (for example, Docker) have risen rapidly and attracted wide attention. Containers have enabled a lot of exciting practice: the ability to package a service, migrate it, and run it in different environments makes services easy to manage and, looked at another way, improves their portability. However, as users migrate their services into production, new problems appear: which server does a particular container run on, how do you run a large number of containers at the same time, how do containers communicate easily across hosts, and so on. It is exactly these problems that prompted us to build Kubernetes, an open-source toolkit from Google that helps address them.

As we discussed in the previous article, we think of Kubernetes as a "container cluster manager." Many engineers are used to calling projects in this area "orchestration systems," perhaps because they liken cluster management to arranging a symphony. I have never found that comparison convincing: an orchestral arrangement is written in advance, with carefully chosen melodies and meticulous scoring, and every performer's part is explicitly specified before the show. Managing a Kubernetes cluster is more like improvised jazz. It is a dynamic system that responds in real time to its inputs and to the current state of the environment it runs in.

So we have to ask: what, in the end, goes into building a container cluster? Can a cluster be described like this: a dynamic system that places containers, monitors their state, and manages the communication between them? Broadly, yes: a container cluster consists of a set of compute nodes (physical servers or virtual machines) plus the system that manages the containers running on them. In the remainder of this article we focus on three topics: what a container cluster is made of, what a container cluster should do in real work, and how its various elements play their roles together. In addition, based on our experience, a container cluster also needs a management layer, and we will keep exploring how that management is realized.

Why run containers as a cluster?

At Google, the container clusters we build have to satisfy a set of common requirements: the cluster must always be available, it must be possible to patch and upgrade it, it must scale on demand, and cluster-level metrics must be easy to instrument and monitor. Containers themselves let services be deployed quickly and easily and let a whole service be split into many small parts that can be operated at a finer granularity. But even though containers make individual operations convenient, meeting the goals above still requires a systematic way of managing the container cluster.

Over the past ten years at Google, we have found that a container cluster manager meets these requirements, and it provides many other benefits as well:


Development with a microservices model makes the whole development process easier to manage. The cluster manager lets us split a complete service into many smaller parts that can be managed and scaled independently of each other. During development we can organize teams around the complexity of each service, and clean, well-specified interfaces let the different small teams develop cooperatively.

The system heals itself when facing failures. When a server fails, the cluster manager automatically restarts the tasks that were running on the failed server on a healthy one.

Horizontal scaling becomes easier. A container cluster provides tools for scaling out; for example, adding more serving capacity can be as simple as changing a setting (the replica count).

Utilization and efficiency are high. After Google migrated services into containers, resource utilization and efficiency improved to a great extent.

The roles of the operations and service teams change. Developers can focus more on the service they provide rather than on the underlying supporting infrastructure. For example, Gmail's operations and development teams hardly ever need to talk directly to the team operating the cluster in order to get their work done; this separation of concerns lets the cluster operations team play a much larger role.

Now that we understand why what we are doing makes sense, let us explore which elements are needed to build an excellent cluster management system, and what you should pay particular attention to if you want to run containers as a cluster and capture these advantages.

Element one: dynamically scheduled containers

Building a successful container cluster takes a little of that "jazz improvisation." You package your task into a container image and clearly state your intent: how the container should run and, roughly, where it should run. The cluster management system ultimately decides where your task actually runs; we call this process "cluster scheduling."

This does not mean that tasks are distributed randomly across the compute nodes. On the contrary, placing a workload means following a set of strict constraints, which, from a computer-science point of view, makes cluster scheduling an interesting and hard problem (note 1). When a task needs to be scheduled, the scheduler decides where to place it by finding a virtual machine or physical server with enough free resources (CPU, RAM, I/O, storage). To meet reliability goals, the scheduler may also need to spread a set of tasks across hosts or racks to reduce the chance of correlated failures, or place special tasks on machines with special hardware (GPUs, local SSDs, and so on). The scheduler also responds to a changing environment: it should reschedule tasks that fail at runtime, and when the cluster grows or shrinks it should rearrange work to improve efficiency. To make this possible, we encourage users to avoid pinning a container to a particular server. You may occasionally need to say "I want this container to run on that machine," but such cases should be relatively rare.
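
To make the fitting step concrete, here is a minimal Python sketch of scheduling as first-fit bin packing. It is an illustration only, not how the Kubernetes scheduler is implemented: real schedulers also weigh spreading, special hardware, and priorities, and all names and numbers below are hypothetical.

def schedule(workload, nodes):
    # Place the workload on the first node with enough free CPU and RAM.
    for node in nodes:
        if node["free_cpu"] >= workload["cpu"] and node["free_ram"] >= workload["ram"]:
            node["free_cpu"] -= workload["cpu"]
            node["free_ram"] -= workload["ram"]
            return node["name"]
    return None  # nothing fits; the task stays pending until capacity frees up

nodes = [{"name": "node-a", "free_cpu": 2.0, "free_ram": 4096},
         {"name": "node-b", "free_cpu": 8.0, "free_ram": 16384}]
print(schedule({"cpu": 4.0, "ram": 8192}, nodes))  # -> node-b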

The next question is: what exactly is the object being scheduled? The simplest answer is a single container. But sometimes you want a group of containers to run cooperatively on one host. For example, a data loader may need to run alongside a database service, or a log compressor/saver process may need to run next to the service it supports. The containers running such pieces usually need to be placed together, and you need to guarantee that dynamic scheduling never separates them. For this purpose we introduced a concept in Kubernetes: the pod. A pod is a set of containers that form a single unit for scheduling and placement onto a server (also called a Kubernetes node). Because multiple pods can be placed on each node, Kubernetes has a reliable way to pack a lot of work onto a node.
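
As a rough sketch of the pod idea (an illustration, not the real Kubernetes API), a pod can be thought of as a named group of container images plus labels, where the whole group is what gets scheduled. The image names and labels here are made up.

from dataclasses import dataclass, field

@dataclass
class Pod:
    name: str
    containers: list                 # images that must run together on one node
    labels: dict = field(default_factory=dict)

# A front-end server and its log saver travel as one schedulable unit.
web_pod = Pod(name="web-1",
              containers=["php-fe-image", "log-saver-image"],
              labels={"env": "prod", "tier": "fe"})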

Element two: thinking in sets

When you work on a single physical node, the usual tools do not generally operate on containers in batches. On a container cluster, however, you will want to scale a service easily across nodes. To do that, you need to think in terms of sets of containers rather than individual ones, and you want these sets of containers to be easy to configure as a whole. In Kubernetes, we introduce two additional concepts for managing groups of pods: labels and replication controllers.

Every pod in Kubernetes has a set of key/value pairs bound to it, which we call labels. You can build queries over these labels to filter out a particular set of pods. Kubernetes does not impose a "right way" to organize pods; it is entirely up to the user, and whatever organization suits the user is appropriate. You can organize pods by the layers of an application, by geographic location, by deployment environment, and so on. In fact, because labels are non-hierarchical, you can organize your pods along several dimensions at once.

For example, say you have a simple service with a front-end tier and a back-end tier, and you run it in three environments: test, staging, and production. You can mark your pods with multiple labels: a production front-end pod might be labelled env=prod, tier=fe, while a production back-end pod is labelled env=prod, tier=be. You can label the pods in your test and staging environments in the same way. Then, when you need to operate on or inspect the cluster, you can restrict an operation to the pods labelled env=prod and see both the front-end and back-end pods of the production environment, or query tier=fe and see your front-end pods across the test, staging, and production environments. As you add more tiers and more environments, you can define this scheme in whatever way best fits your own needs.
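
The following small Python sketch shows the selection idea: a selector matches every pod whose labels contain all of its key/value pairs. The pod names are made up for illustration.

pods = [
    {"name": "fe-1", "labels": {"env": "prod",    "tier": "fe"}},
    {"name": "be-1", "labels": {"env": "prod",    "tier": "be"}},
    {"name": "fe-2", "labels": {"env": "staging", "tier": "fe"}},
]

def select(pods, selector):
    # Keep the pods whose labels include every key/value pair of the selector.
    return [p["name"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

print(select(pods, {"env": "prod"}))                 # ['fe-1', 'be-1']
print(select(pods, {"tier": "fe"}))                  # ['fe-1', 'fe-2']
print(select(pods, {"env": "prod", "tier": "fe"}))   # ['fe-1']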

Scaling out

We already had something similar before containers: pools of identically configured physical servers that we identified and maintained as a group. We can borrow that idea to scale out at the level of the container cluster. To make this step easier, Kubernetes maintains a helper object called the replication controller. It maintains a pool of pods and is described by a few properties: the desired number of replicas (the replica count), a template for creating new pods, and a label selector used to find the pods it manages. The principle behind this object is not hard to understand; in pseudo-code it looks like this:

object replication_controller {
  property num_replicas
  property template
  property label_selector

  runReplicationController(num_desired_pods, template, label_selector) {
    loop forever {
      num_pods = length(query(label_selector))
      if num_pods > num_desired_pods {
        kill_pods(num_pods - num_desired_pods)
      } else if num_pods < num_desired_pods {
        create_pods(template, num_desired_pods - num_pods)
      }
    }
  }
}

Working through this code with an example: suppose you want to run a PHP front end as three pods. You would create a replication controller with an appropriate pod template (pointing at your PHP container image), num_replicas set to 3, and a label selector such as env=prod, tier=fe that locates the set of pods the controller operates on. Framed this way, the replication controller is easy to reason about when the cluster shrinks or grows: it keeps adjusting the cluster until it reaches the desired end state. If you want to shrink or grow your service, all you have to do is change the desired replica count, and the replication controller handles the rest. By focusing on the desired state of the system, we make the problem much easier to deal with.
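
Here is a minimal runnable sketch of that reconcile step in Python, using the PHP front-end example. The query, create_pods, and kill_pods callbacks stand in for the real cluster API and are hypothetical.

def reconcile(num_desired, template, selector, query, create_pods, kill_pods):
    current = query(selector)
    if len(current) > num_desired:
        kill_pods(current[num_desired:])                   # scale in
    elif len(current) < num_desired:
        create_pods(template, num_desired - len(current))  # scale out

# Desired state: three PHP front-end pods labelled env=prod, tier=fe.
# Only one matching pod exists, so the controller creates two more.
reconcile(
    num_desired=3,
    template="php-fe-image",
    selector={"env": "prod", "tier": "fe"},
    query=lambda sel: ["fe-1"],
    create_pods=lambda tmpl, n: print("create", n, "pod(s) from", tmpl),
    kill_pods=lambda pods: print("delete", pods),
)

Changing num_desired to 5 later would make the same loop create two more pods; nothing else about the request has to change.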

Element three: connecting services within the cluster

The features listed above already let you do some very interesting things. Any highly parallel work-distribution system (continuous integration, video transcoding, and so on) can run without much interaction between its pods. Most more complex services, however, are built as a web of small (micro) network services with a lot of interaction between pods; in the traditional layered view of an application, each layer is a node in that graph.

A cluster management system needs a name resolution system that works together with the elements described above. Just as DNS resolves a domain name to IP addresses, this naming service resolves a service name to a set of targets, with some additional requirements. In particular, when the state of the running system changes, that change should be picked up quickly: a "service name" should resolve to a set of targets, possibly with extra information about each target (for example, shard assignment). In the Kubernetes API, this work is done with label selectors and the "watch" API pattern (note 2). Together they provide a very lightweight form of service discovery.
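
The sketch below shows the shape of that interaction in Python: the cluster side publishes the current endpoint set for a name (with a version), and a client "watch" blocks until an update arrives. The in-memory queue stands in for the real watch channel, and all names and addresses are illustrative.

import queue

endpoint_updates = queue.Queue()   # stand-in for the cluster's watch channel

def publish(service, endpoints, version):
    # Cluster side: announce the current endpoint set for a service name.
    endpoint_updates.put({"service": service,
                          "endpoints": endpoints,
                          "version": version})

def watch():
    # Client side: block until the endpoint set changes, then return the update.
    return endpoint_updates.get()

publish("frontend", ["10.0.0.5:8080", "10.0.0.6:8080"], version=7)
print(watch())   # the client sees the new endpoint set and its version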

Most clients will not be rewritten right away (or ever) to take advantage of the new naming API; most projects would rather have a single stable address and port to talk to. To make up for this, Kubernetes introduces the concept of a service proxy. This is a simple network load balancer/proxy that performs the name query for you and exposes the result as a single stable IP/port (reachable via DNS) on your network. Currently the proxy does simple round-robin load balancing across all of the backends identified by a label selector. The plan is for Kubernetes to allow custom proxies/ambassadors that can make smarter decisions for a specific domain (see the Kubernetes roadmap for more detail). For example, a MySQL-aware ambassador could know to send write traffic to the master node and read traffic to the read slaves.
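
As a toy illustration of the round-robin behaviour (the backend addresses are made up), each incoming request is simply forwarded to the next backend in the set identified by the label selector:

import itertools

backends = ["10.0.0.5:8080", "10.0.0.6:8080", "10.0.0.7:8080"]
next_backend = itertools.cycle(backends).__next__

for _ in range(4):
    print("forwarding request to", next_backend())
# requests go to .5, .6, .7, then back to .5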

Summary

You have now learned more about the three key elements of a cluster management system, namely dynamically scheduled containers, thinking about containers in sets, and connecting services within the cluster, and how they play their roles together.

At the beginning of this article we asked: "What does it take to build a container cluster?" We hope the information and details presented above have given you the answer. In a nutshell, a container cluster is a dynamic system that places and manages containers, grouped together into pods and running on nodes, along with the channels that let them connect and communicate with each other.

When we started building Kubernetes, our goal was to make Google's experience with containers concrete. Our initial focus was only on scheduling and the dynamic placement of containers, but once we fully understood what building a real service requires, we quickly found that additional elements were absolutely necessary, such as pods, labels, and replication controllers. In my view, these are the minimum set of building blocks for a usable container cluster management system.

Kubernetes is still evolving, but its current state is already quite good. We have just released version v0.8, which you can download, and we are still adding new features and rebuilding existing ones. We have also published a roadmap to v1.0; that work has already started, and a growing community of partners is contributing (Red Hat, VMware, Microsoft, IBM, CoreOS, and others), along with many users running Kubernetes in different environments.

Although we have a lot of experience in this area, Google does not have all the answers; there may be special requirements and particular considerations we have not yet run into. So as you think through your own clusters, please get involved in the project we are building: try it out, file bug reports, ask for help, or send a pull request (PR).

- Posted by Joe Beda, Senior Staff Engineer and Kubernetes Cofounder


Note 1: This is essentially the classic knapsack problem, which in the general case is NP-hard.
Note 2: The "watch API pattern" is a way to deliver asynchronous events from a service. It is common in lock services and the systems built on them (ZooKeeper and the like), and it originally comes from Google's Chubby paper. Essentially, the client sends a request and "hangs" until something changes. The request usually carries a version number, so the client is kept up to date on any change.
     
         
         
         