  Use Nginx as a load balancer
     
  Add Date : 2016-07-25      
         
         
         
Load balancing across multiple application instances is a common technique for optimizing resource utilization, increasing throughput, reducing latency, and ensuring fault-tolerant configurations.

Nginx can be used as a very efficient HTTP load balancer to distribute requests among different application servers, improving the performance, scalability, and reliability of web applications.

1. Load balancing methods

Nginx supports the following load balancing methods:

round-robin: requests are distributed to the application servers in turn.

least-connected: the next request is assigned to the server with the fewest active connections.

ip-hash: a hash function based on the client's IP address determines which server handles the next request.

2. Default load balancing configuration

The simplest nginx load balancing configuration looks like this:

http {
    upstream myapp {
        server srv1.example.com;
        server srv2.example.com;
        server srv3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp;
        }
    }
}

In the above example, there are three instances of the same application running on srv1 through srv3. When no load balancing method is specified, nginx defaults to round-robin. All requests are proxied to the server group myapp, and nginx distributes the HTTP requests across its members.

The reverse proxy implementation in nginx includes load balancing for HTTP, HTTPS, FastCGI, uwsgi, SCGI, and memcached.

To configure load balancing for HTTPS instead of HTTP, simply use "https" as the protocol.

When setting up load balancing for FastCGI, uwsgi, SCGI, or memcached, use the fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives respectively.
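
As a minimal illustrative sketch (not from the original article), the same upstream mechanism can be reached over HTTPS or FastCGI; the certificate paths, backend addresses, and the group name fastcgi_backend below are hypothetical:

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # hypothetical certificate paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        proxy_pass https://myapp;     # same upstream group, https scheme (assumes the backends accept TLS)
    }
}

upstream fastcgi_backend {
    server 127.0.0.1:9000;            # hypothetical FastCGI (e.g. PHP-FPM) instances
    server 127.0.0.1:9001;
}

server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass fastcgi_backend; # fastcgi_pass instead of proxy_pass
    }
}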

3. Least-connected load balancing

Another load balancing method is least-connected. Least-connected distributes the load among application servers in a fairer way when some requests take a long time to complete.

With least-connected load balancing, nginx tries not to overload a busy application server with additional requests, distributing new requests to a less busy server instead.

Least-connected load balancing in nginx is activated when the least_conn directive is used in the server group configuration:

upstream myapp1 {
    least_conn;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

4. Session persistence

Please keep in mind that with round-robin or least-connected load balancing, each subsequent client request can be distributed to a different server. There is no guarantee that the same client will always be directed to the same server.

If you need to bind a client to a particular application server, in other words make the client's session "sticky" or "persistent" so that a specific server is always selected, you can use the ip-hash load balancing method.

With ip-hash, the client's IP address is used as a hash key to determine which server in the group should handle the client's request. This method ensures that requests from the same client are always directed to the same server, unless that server is unavailable.

To configure ip-hash load balancing, just add the ip_hash directive to the server (upstream) group configuration:

upstream myapp1 {
    ip_hash;
    server srv1.example.com;
    server srv2.example.com;
    server srv3.example.com;
}

5. Weighted load balancing

It is also possible to influence the nginx load balancing algorithm by using server weights.

In the examples above, server weights are not configured, which means that all servers are treated as equally qualified for a given load balancing method.

With round-robin in particular, this means a roughly equal distribution of requests across the servers.

When the weight parameter is specified for a server, the weight is taken into account as part of the load balancing decision.

upstream myapp1 {
    server srv1.example.com weight=3;
    server srv2.example.com;
    server srv3.example.com;
}

With this configuration, for every five new requests, three will be directed to srv1, one to srv2, and one to srv3.

In recent versions of nginx, it is also possible to use weights with the ip-hash and least-connected load balancing methods.
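
As a hedged illustration (not shown in the original article), a weight could be combined with least_conn in newer nginx versions, for example:

upstream myapp1 {
    least_conn;
    server srv1.example.com weight=3;   # receives a proportionally larger share of connections
    server srv2.example.com;
    server srv3.example.com;
}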

6. Health checks

The reverse proxy implementation in nginx also includes passive server health checks. If a server responds with an error, nginx marks that server as failed and tries to avoid selecting it for subsequent requests.

The max_fails directive sets the maximum number of failed attempts to communicate with the server that may occur within fail_timeout; the default is 1. When max_fails is set to 0, health checks are disabled for that server. The fail_timeout parameter also defines how long the server stays marked as failed. After the fail_timeout interval following a server failure, nginx starts to gently probe the server with live client requests; if the probes succeed, the server is marked as alive again.
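
As an example of these directives (the specific values are chosen only for illustration), a server group with explicit failure thresholds might look like this:

upstream myapp1 {
    server srv1.example.com max_fails=3 fail_timeout=30s;   # marked failed after 3 errors within 30s
    server srv2.example.com max_fails=3 fail_timeout=30s;
    server srv3.example.com;                                 # defaults: max_fails=1, fail_timeout=10s
}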
     
         
         
         