  Nginx Installation and Configuration
     
  Add Date : 2018-11-21      
         
         
         
  First, Introduction

Web servers built on the traditional process- or thread-based model handle each concurrent connection with a dedicated process or thread, which inevitably blocks on network and I/O operations and also results in poor memory and CPU utilization. Spawning a new process or thread requires preparing its runtime environment in advance, including allocating heap and stack memory and creating a new execution context. All of this costs CPU time, and an excessive number of processes or threads causes thrashing and frequent context switching, degrading system performance even further.

From its earliest design stages, nginx's main focus was high performance and dense use of physical computing resources, so it adopted a different architectural model. Inspired by the advanced event-based processing mechanisms found in various operating systems, nginx uses a modular, event-driven, asynchronous, single-threaded, non-blocking architecture and relies heavily on multiplexing and event notification. In nginx, connection requests are handled by a small number of worker processes, each running an efficient run-loop, and each worker can handle thousands of concurrent connections and requests in parallel.

If the load is mainly CPU-intensive, for example SSL or compression, the number of workers should equal the number of CPUs; if the load is mainly I/O-intensive, for example serving a large amount of content to clients, the number of workers should be 1.5 to 2 times the number of CPUs.

Nginx runs several processes simultaneously as needed: one master process and several worker processes; when caching is configured there are also a cache loader process and a cache manager process. All of these processes contain only a single thread and communicate mainly through shared memory. The master process runs as root, while the workers, cache loader and cache manager should run as an unprivileged user.

The master process performs the following tasks:

1. Read and validate the configuration;

2. Create, bind and close sockets;

3. Start, terminate and maintain the configured number of worker processes;

4. Reconfigure operating characteristics without interrupting service;

5. Control non-interrupting upgrades of the program binary (start the new binary and roll back to the old version if needed);

6. Re-open log files to implement log rotation;

7. Compile embedded Perl scripts;

The worker processes' main tasks include:

1. Receive and process connections from clients;

2. Provide reverse proxying and filtering;

3. Any other task nginx is asked to perform;

The cache loader process's main tasks include:

1. Check the cached objects in the cache store;

2. Build the in-memory cache metadata database;

The main task of the cache manager process:

1. Check for invalid and expired cache entries;

Nginx configuration has several different contexts: main, http, server, upstream and location (the mail context implements reverse proxying for mail services). The configuration format and syntax follow a so-called C-style convention, so it supports nesting, has clear logic, and is easy to create, read and maintain.
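As a rough sketch of how these contexts nest (names and values here are purely illustrative, not a configuration to copy):

# main context: global settings such as worker_processes and error_log
worker_processes 2;

events {
    # events context: connection-processing settings
    worker_connections 1024;
}

http {
    # http context: settings shared by all virtual servers
    upstream backend {
        # upstream context: a named group of backend servers
        server 192.168.1.10:8080;
    }

    server {
        # server context: one virtual server
        listen 80;

        location / {
            # location context: per-URI settings
            proxy_pass http://backend;
        }
    }
}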

Nginx's code consists of a core and a series of modules. The core mainly provides the basic functions of a web server, plus web and mail reverse proxying; it also enables the network protocols, builds the necessary runtime environment and ensures smooth interaction between the different modules. Most protocol-related and application-specific functionality, however, is implemented by nginx modules. These modules can be roughly divided into event modules, phase handlers, output filters, variable handlers, protocol modules, upstream modules and load balancers, which together make up nginx's HTTP functionality. The event modules provide an OS-independent event notification mechanism (each operating system has its own, such as kqueue or epoll). The protocol modules are responsible for establishing sessions with clients over HTTP, TLS/SSL, SMTP, POP3 and IMAP.

Internally, nginx processes requests through pipelines or chains of modules; in other words, every function or operation is implemented by a module, for example compression, communication with upstream servers, and establishing sessions with memcached, uwsgi or FastCGI backends.

Second, Installation and Configuration

1. nginx installation and configuration

A, resolve dependencies

Compiling and installing nginx requires the development package groups "Development tools", "Server Platform Development" and "Desktop Platform Development" to be installed first.
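On a CentOS/RHEL system with yum (assumed here), installing these package groups would typically look like:

[root@node1 ~]# yum groupinstall -y "Development tools" "Server Platform Development" "Desktop Platform Development"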

B, download, compile, install nginx

[root@node1 ~]# wget http://nginx.org/download/nginx-1.6.0.tar.gz
[root@node1 ~]# tar xf nginx-1.6.0.tar.gz
[root@node1 ~]# cd nginx-1.6.0

First, add an nginx user for the nginx service processes to run as:

[root@node1 nginx-1.6.0]# useradd -r nginx

[root@node1 nginx-1.6.0]# ./configure \
> --prefix=/usr/local/nginx \
> --sbin-path=/usr/local/nginx/sbin/nginx \
> --conf-path=/etc/nginx/nginx.conf \
> --error-log-path=/var/log/nginx/error.log \
> --http-log-path=/var/log/nginx/access.log \
> --pid-path=/var/run/nginx/nginx.pid \
> --lock-path=/var/lock/nginx.lock \
> --user=nginx \
> --group=nginx \
> --with-http_ssl_module \
> --with-http_flv_module \
> --with-http_stub_status_module \
> --with-http_gzip_static_module \
> --http-client-body-temp-path=/var/tmp/nginx/client/ \
> --http-proxy-temp-path=/var/tmp/nginx/proxy/ \
> --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
> --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
> --http-scgi-temp-path=/var/tmp/nginx/scgi \
> --with-pcre
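Once ./configure completes without errors, build and install in the usual way (a standard follow-up step, assumed here since the listing above shows only the configure invocation):

[root@node1 nginx-1.6.0]# make && make install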

Explanation:

1. Nginx can use tcmalloc (a fast, multi-threaded malloc library with an excellent performance-analysis tool) to speed up memory allocation. To use this feature, install gperftools first and then add the --with-google_perftools_module option when compiling nginx.

2. If you want to use nginx's Perl module, add the --with-http_perl_module option to the configure script. However, this module is still experimental and may behave unexpectedly at runtime, so its use is not covered here. If you want to use CGI with nginx, it can also be implemented on top of FastCGI; refer to the online documentation for details.

C, provide a SysV init script for nginx:

[root@node1 ~]# vi /etc/init.d/nginx

#!/bin/bash
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
   # make required directories
   user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
   options=`$nginx -V 2>&1 | grep 'configure arguments:'`
   for opt in $options; do
       if [ `echo $opt | grep '.*-temp-path'` ]; then
           value=`echo $opt | cut -d "=" -f 2`
           if [ ! -d "$value" ]; then
               # echo "creating" $value
               mkdir -p $value && chown -R $user $value
           fi
       fi
   done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
  $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac


Then give the script execute permission, add it to the list of services, and enable it to start automatically at boot:

[root@node1 ~]# chmod +x /etc/init.d/nginx
[root@node1 ~]# chkconfig --add nginx
[root@node1 ~]# chkconfig nginx on
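The script can then be used to manage the service, for example (a typical usage sketch, assuming the SysV service command is available; output not shown):

[root@node1 ~]# service nginx configtest
[root@node1 ~]# service nginx start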


  2, configure Nginx

As described in the introduction, nginx consists of a core plus a series of modules (event modules, phase handlers, output filters, variable handlers, protocol modules, upstream modules and load balancers); the core provides the basic web server, web/mail reverse proxying and the runtime environment, while most protocol-related and application-specific functionality is implemented by the modules.


Nginx's core modules are Main and Events; in addition there are standard HTTP modules, optional HTTP modules and mail modules, and many third-party modules are also supported. Main holds configuration for the error log, processes and permissions, while Events configures the I/O model, such as epoll, kqueue, select or poll; both are essential modules.


The nginx main configuration file is composed of several segments, commonly called nginx contexts. Each context is defined in the format shown below. Note that every directive must end with a semicolon (;); otherwise it is a syntax error:


      
<context> {
    <directive> <parameters>;
}


  A, configure the main module


      The following describes several key parameters of the main module:


      a, error_log


      Used to configure the error log; it can appear in the main, http, server and location contexts. Syntax: error_log file | stderr [debug | info | notice | warn | error | crit | alert | emerg]


      If the --with-debug option was used when compiling, debug-level logging can also be configured as:


      error_log LOGFILE [debug_core | debug_alloc | debug_mutex | debug_event | debug_http | debug_imap];

      To disable the error log, do not use "error_log off"; instead use something like error_log /dev/null crit;

      b, timer_resolution

      Reduces the number of gettimeofday() system calls. By default, gettimeofday() is called every time nginx returns from kevent(), epoll, /dev/poll, select() or poll(). On x86-64 systems the cost of gettimeofday() is negligible, so this setting can usually be ignored. The syntax is:

      timer_resolution interval

      c, worker_cpu_affinity cpumask .....

              Used to bind worker processes to CPUs; it can only be used in the main context. For example:


              worker_processes 4;
              worker_cpu_affinity 0001 0010 0100 1000;

        d, worker_priority


          Sets the scheduling priority (nice value) for the worker processes; this parameter can only be used in the main context and defaults to 0. The syntax is:

          worker_priority number

        e, worker_processes


          Each worker process is a single-threaded process. If nginx is used for CPU-intensive work such as SSL or gzip and the host has two or more CPUs, this parameter should be set to the number of CPU cores; if nginx mostly serves a large number of static files whose total size is larger than the available memory, this parameter should be set to a larger value to take full advantage of disk bandwidth.

     Together with the worker_connections variable in the events context, this parameter determines the value of maxclients:


          maxclients = worker_processes * worker_connections

        f, worker_rlimit_nofile

          Sets the maximum number of file descriptors a worker process may open. The syntax is:


          worker_rlimit_nofile number
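Putting the main-context directives above together, a minimal sketch (the values are illustrative assumptions, not recommendations) might look like:

          worker_processes 4;
          worker_cpu_affinity 0001 0010 0100 1000;
          worker_priority 0;
          worker_rlimit_nofile 51200;
          error_log /var/log/nginx/error.log warn;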


      B, configure the Events module


          a, worker_connections


          Sets the maximum number of connections each worker handles. Together with the worker_processes variable from the main context it determines the value of maxclients:

          maxclients = worker_processes * worker_connections

          In a reverse proxy scenario the calculation differs from the formula above: by default the browser opens two connections and nginx opens two file descriptors for each connection, so maxclients is calculated as:


          maxclients = worker_processes * worker_connections / 4

          b, use


              Where more than one event model (I/O mechanism) is available, this directive sets which one nginx uses. By default the ./configure script picks the mechanism best suited to the current OS version, and letting nginx choose automatically is recommended. The syntax is:


          use [kqueue | rtsig | epoll | /dev/poll | select | poll | eventport]
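A corresponding events block (illustrative values; epoll is assumed to be available, as on Linux) might look like:

          events {
              worker_connections 10240;
              use epoll;
          }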


    C, virtual server configuration


          server {
              ...;
          }


          Used to specify the properties of a virtual server; the available settings include backlog, rcvbuf, bind and sndbuf, etc.


      D, location-related configuration


          location [modifier] uri { ... }  or  location @name { ... }

          Typically used inside a server context to define access properties for a URI. location blocks can be nested.
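A small combined sketch of a virtual server with a location block (the server name and paths are illustrative assumptions):

          server {
              listen 80;
              server_name www.example.com;

              location / {
                  root  /usr/local/nginx/html;
                  index index.html;
              }
          }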


  3, Nginx reverse proxy


      Nginx implements reverse proxying through the proxy module. When acting as a web reverse proxy server, nginx receives client requests and, according to the URI, client parameters or other logic, dispatches them to an upstream server. The most important directive for the reverse proxy feature is proxy_pass, which proxies a location's URI to a specified upstream server (or group). In the following example, the /uri prefix of the location is replaced with /bbc on the upstream server:


      location /uri {
          proxy_pass http://www.abc.com:8080/bbc;
        }


      However, there are two exceptions to this mapping. The first: if the location's URI is defined by a pattern match (regular expression), the URI is passed to the upstream server unchanged and cannot be rewritten to a different URI. For example, in the following case /bbs is proxied to http://www.abc.com/bbs:

        location ~ ^/bbs {
          proxy_pass http://www.abc.com;
        }


      The second exception: if URL rewriting is used inside the location, nginx uses the rewritten URL to process the request and ignores the URI defined on the proxy_pass target. In the following example, the URI sent to the upstream server is /index.php?page=<captured path> rather than /index:

      location / {
          rewrite /(.*)$ /index.php?page=$1 break;
          proxy_pass http://localhost:8080/index;
        }


    A, proxy module directives

      The proxy module provides a large number of configuration directives that define many aspects of how the proxy behaves, such as connection timeouts and the HTTP protocol version used when talking to the upstream server. The commonly used directives are briefly explained below:

      proxy_connect_timeout: the maximum time nginx waits while establishing a connection to the upstream server

      proxy_cookie_domain: rewrites the domain attribute of Set-Cookie headers received from the upstream server to the specified value; the value can be a string, a regular expression pattern or a variable reference

      proxy_hide_header: specifies response headers that should not be passed on to the client

      proxy_pass: specifies the upstream server (URL) to which requests are proxied

      proxy_set_header: rewrites a header of the request sent to the upstream server

      proxy_redirect: rewrites the Location and Refresh headers of responses received from the upstream server

      proxy_send_timeout: the maximum interval between two successive write operations to the upstream server before the connection is closed

      proxy_read_timeout: the maximum interval between two successive read operations from the upstream server before the connection is closed

For example:

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 30;
    proxy_send_timeout 15;
    proxy_read_timeout 15;


     B, upstream module

Used in conjunction with the proxy module, the upstream module defines a new context containing a group of upstream servers; these servers can be given different weights and different types, and can be marked down for maintenance, among other settings.
Commonly used directives of the upstream module are:
ip_hash: distributes requests based on the client IP address, guaranteeing that requests from the same client are always forwarded to the same upstream server
keepalive: the number of idle connections to upstream servers that each worker process keeps cached
least_conn: least-connections scheduling algorithm
server: defines an upstream server address and accepts a number of optional parameters, such as:
weight: the server's weight
max_fails: the maximum number of failed connection attempts; an attempt is considered failed after waiting for fail_timeout
fail_timeout: how long to wait for the target server to send a response
backup: marks the server as a backup, used only when all the other servers are unavailable
down: manually marks the server so that it does not handle any requests

E.g:

    upstream backend {
      server www.abc.com weight=5;
      server www2.abc.com:8080 max_fails=3 fail_timeout=30s;
    }


          The upstream module mainly supports three load-balancing algorithms: round-robin, ip hash (ip_hash) and least connections (least_conn).


          In addition, the upstream module can also be used for load balancing of non-HTTP protocols, as in the following example, which defines nginx load balancing for a memcached service:

upstream memcachesrvs {
    server 192.168.1.200:11211;
    server 192.168.1.201:11211;
}

server {
    location / {
        set $memcached_key "$uri?$args";
        memcached_pass memcachesrvs;
        error_page 404 = @fallback;
    }

    location @fallback {
        proxy_pass http://127.0.0.1:8080;
    }
}

      C, if conditional statements

          Using the if statement within a location allows conditional processing. It usually contains a return statement, and rewrite rules inside it generally end with the last or break flag, but it can be used in a variety of scenarios as needed. Be aware that improper use can cause unpredictable results.

location / {
    if ($request_method = "PUT") {
        proxy_pass http://upload.magedu.com:8080;
    }
    if ($request_uri ~ "\.(jpg|gif|jpeg|png)$") {
        proxy_pass http://imageservers;
        break;
    }
}

upstream imageservers {
    server 192.168.1.202:80 weight=2;
    server 192.168.1.203:80 weight=3;
}


        a, conditions available to the if statement

          Comparison and regular expression matching:

          =, !=: equality comparison

          ~: returns "true" if the specified regular expression pattern matches; the match is case-sensitive

          ~*: returns "true" if the specified regular expression pattern matches; the match is case-insensitive

          !~: returns "true" if the specified regular expression pattern does not match; the match is case-sensitive

          !~*: returns "true" if the specified regular expression pattern does not match; the match is case-insensitive

          File and directory tests:

          -f, !-f: tests whether the specified path exists and is a regular file

          -d, !-d: tests whether the specified path exists and is a directory

          -e, !-e: tests whether the specified path exists (it may be a file or a directory)

          -x, !-x: tests whether the specified path exists and is executable

      b, commonly used nginx built-in variables

          The following are some of nginx's commonly used built-in (global) variables; they are typically used in conditional statements:

            1. $uri: the URI of the current request, without arguments

            2. $request_uri: the requested URI, with its complete arguments

            3. $host: the Host header of the HTTP request; if the request carries no Host header, the name of the server handling the request is used instead

            4. $hostname: the hostname of the machine on which nginx is running

            5. $remote_addr: the client IP address

            6. $remote_port: the client port

            7. $remote_user: the user name supplied by the client during user authentication

            8. $request_filename: the local file path that the requested URI maps to after root or alias translation

            9. $request_method: the request method

            10. $server_addr: the server address

            11. $server_name: the server name

            12. $server_port: the server port

            13. $server_protocol: the protocol used to send the response to the client, such as HTTP/1.1 or HTTP/1.0

            14. $scheme: the scheme used by the request, such as http or https

            15. $http_HEADER: the value of the specified HEADER in the request, for example $http_host matches the Host header of the request

            16. $sent_http_HEADER: the value of the specified HEADER in the response, for example $sent_http_content_type matches the Content-Type header of the response

            17. $document_root: the root directory configured for the current request
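As a small illustration of how a few of these variables might be used (the upstream name "backend" and the User-Agent check are assumptions for the example, not part of the text above):

location / {
    # $http_user_agent is the $http_HEADER form of the User-Agent request header
    if ($http_user_agent ~* "MSIE [1-6]\.") {
        return 403;
    }
    # pass the original Host header and the client address to the upstream server
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://backend;
}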

Third, Reverse Proxy Performance Optimization

    In a reverse proxy scenario, nginx provides a series of directives for tuning its operating characteristics; setting them (for example, buffer sizes) to reasonable values can improve overall performance.


  1, buffer settings


      By default, nginx tries to receive as much of the upstream server's response as possible before responding to the client, buffering it locally and sending it to the client in one go. However, when the response is too large, nginx falls back to temporary storage on the local disk, which greatly reduces performance. Therefore, on hosts with plenty of free memory, the buffers used for temporarily storing these responses should be set to reasonable values.


      proxy_buffer_size size: sets the size of the buffer used for the first part of the response received from the upstream server


      proxy_buffering on | off: enables buffering of upstream server responses. When buffering is disabled and proxy_max_temp_file_size is set to 0, responses from the upstream server are passed to the client synchronously as they are received; in general, enabling proxy_buffering and setting proxy_max_temp_file_size to 0 keeps response buffering enabled while preventing responses from being written to disk.

      proxy_buffers number size (for example 8 4k or 8 8k): the number and size of the buffers used to hold the response read from the upstream server
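A hedged sketch of these buffering directives in an http or server context (the sizes are illustrative assumptions):

      proxy_buffering on;
      proxy_buffer_size 4k;
      proxy_buffers 8 8k;
      # setting proxy_max_temp_file_size to 0 keeps buffered responses out of temporary files on disk
      proxy_max_temp_file_size 0;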

  2, caching

      As a reverse proxy, nginx can cache responses from upstream servers locally and build responses to subsequent client requests directly from that local cache.


      proxy_cache zone | off: selects a user-defined shared memory zone for the cache; the same zone can be referenced in many places. The cache honors the cache-related headers in the upstream server's response, such as "Expires", "Cache-Control: no-cache", "Cache-Control: max-age=XXX", "private" and "no-store". If proxy_cache_key contains user-specific data such as $cookie_xxx, exposing such cached content publicly may be a risk; accordingly, responses whose headers contain any of the following flags are not cached:

Set-Cookie

    Cache-Control containing "no-cache", "no-store", "private", or a "max-age" with a non-numeric or 0 value

    Expires with a time in the past

    X-Accel-Expires: 0

      proxy_cache_key: sets the "key" used when storing and retrieving cache entries; variables can be used in its value, but variables that vary between otherwise identical requests can cause the same content to be cached multiple times. On the other hand, including user-specific information in the key helps prevent one user's private data from being returned to other users.


      proxy_cache_lock: when enabled, prevents several identical requests from being sent to the upstream server simultaneously on a cache miss; it takes effect per worker

      proxy_cache_lock_timeout: how long the proxy_cache_lock lock is held

      proxy_cache_min_uses: the number of times a response must be requested before it is cached


      proxy_cache_path path [levels=levels] keys_zone=name:size [...]: defines the path used to store cached responses, together with a shared memory zone (keys_zone=name:size) that holds the keys and metadata of the cached objects. Its optional parameters include:


          levels: the length of each level of the subdirectory names; valid values are 1 or 2, separated by colons, with at most 3 levels

          inactive: the maximum time a cache entry may remain unused before it is removed from the cache

          max_size: limits the size of the cache; when the cached objects exceed this space, the cache manager removes them with an LRU algorithm

          loader_files: the maximum number of files whose metadata the cache loader (cache_loader) loads in one iteration

          loader_threshold: the maximum duration of one cache loader iteration before it sleeps

E.g:

proxy_cache_path /data/nginx/cache/one   levels=1     keys_zone=one:10m;
proxy_cache_path /data/nginx/cache/two   levels=2:2   keys_zone=two:100m;
proxy_cache_path /data/nginx/cache/three levels=1:1:2 keys_zone=three:1000m;


      proxy_cache_use_stale: specifies in which cases (such as error, timeout or http_500) nginx may use locally cached but stale objects to respond directly to client requests when the upstream server cannot be contacted. The format is:

proxy_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_502 | http_503 | http_504 | http_404 | off


      proxy_cache_valid [code ...] time: sets how long responses with the given status codes remain valid in the cache, for example:

proxy_cache_valid 200 302 10m;


      proxy_cache_methods [GET HEAD POST]: enables caching for the listed request methods

      proxy_cache_bypass string ...: defines conditions under which the response will not be taken from the cache, for example:

proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
proxy_cache_bypass $http_pragma $http_authorization;

Use Cases

http {
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m
                                       inactive=24h max_size=1g;
    server {
        location / {
            proxy_pass            http://www.magedu.com;
            proxy_set_header      Host $host;
            proxy_cache           STATIC;
            proxy_cache_valid     200 1d;
            proxy_cache_valid     301 302 10m;
            proxy_cache_valid     any 1m;
            proxy_cache_use_stale error timeout invalid_header updating
                                  http_500 http_502 http_503 http_504;
        }
    }
}


  3, compression


  nginx can compress response bodies before they are sent to the client, which effectively saves bandwidth on proxied traffic and improves response speed for clients. nginx is normally compiled with the gzip module by default, so compression only needs to be enabled:

http {
    gzip on;                # enable compression
    gzip_http_version 1.0;
    gzip_comp_level 2;      # compression level
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript application/json;    # MIME types to compress
    gzip_disable msie6;     # browsers for which compression is disabled
}


  The gzip_proxied directive defines, for proxied requests, which types of responses have compression enabled; for example, the "expired" value enables compression for objects that cannot be cached because of an Expires header. Other accepted values include "no-cache", "no-store", "private", "no_last_modified", "no_etag", "auth" and "any", while "off" disables compression of proxied responses.
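For instance, a hedged sketch of gzip_proxied in the http context (the chosen values are illustrative assumptions):

http {
    gzip on;
    gzip_proxied expired no-cache no-store private auth;
}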
     
         
         
         