Yesterday, in the company's WeChat group, the CTO shared this news: nginx now gives operations a new option for making TCP-based backend services highly available, which is great! Until now, Nginx did not support the TCP protocol, so high availability and load balancing for TCP-based backend services could only be handled by other software, such as HAProxy.
nginx 1.9.0 has been released; this version adds a stream module for generic TCP proxying and load balancing.
The ngx_stream_core_module module has been available since version 1.9.0. It is not built by default; it has to be enabled at compile time with the --with-stream configuration parameter. Other changes in 1.9.0 include:

Change: the obsolete aio and rtsig event methods were removed.
Feature: the "zone" directive can now be used inside an "upstream" block.
Feature: the stream module, with TCP proxying and load balancing support.
Feature: ngx_http_memcached_module now supports byte ranges.
Feature: the Windows version supports shared memory together with address space layout randomization.
Feature: the "error_log" directive can be used at mail and server level.
Bugfix: the "proxy_protocol" parameter of the "listen" directive did not work if not specified in the first "listen" directive for a listening socket.
Compilation and installation: omitted here (covered in an earlier post). Below is the simple configuration demo of the stream module from the official documentation: http://nginx.org/en/docs/stream/ngx_stream_core_module.html
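For quick reference, a minimal build sketch (nginx 1.9.0 built from source with the stream module enabled; the download URL, version, and install prefix are only illustrative, adjust them to your environment):

# download and unpack the nginx 1.9.0 source
wget http://nginx.org/download/nginx-1.9.0.tar.gz
tar zxf nginx-1.9.0.tar.gz
cd nginx-1.9.0
# enable the stream (TCP proxy) module, which is off by default
./configure --with-stream
make && make install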
worker_processes auto;
error_log /var/log/nginx/error.log info;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server backend1.example.com:12345 weight=5;
        server 127.0.0.1:12345 max_fails=3 fail_timeout=30s;
        server unix:/tmp/backend3;
    }

    server {
        listen 12345;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass backend;
    }

    server {
        listen [::1]:12345;
        proxy_pass unix:/tmp/stream.socket;
    }
}
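Once the stream block is in place, the configuration can be syntax-checked and reloaded as usual (a small sketch, assuming nginx was installed under the default /usr/local/nginx prefix; adjust the binary path to your installation):

/usr/local/nginx/sbin/nginx -t         # test the configuration for syntax errors
/usr/local/nginx/sbin/nginx -s reload  # reload workers with the new configuration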
"" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" " "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" "" " here I made a tcp reverse lookup little experiment
Environment:
125.208.14.177:3306 - database 1
125.208.14.177:3307 - database 2
218.78.186.162 - nginx server
Configuration file:
worker_processes auto;
error_log /var/log/nginx/error.log info;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 125.208.14.177:3306 weight=5 max_fails=3 fail_timeout=30s;
        server 125.208.14.177:3307 weight=4 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 12345;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass backend;
    }
}
Testing:
[root@iZ236mlq2naZ ~]# mysql -uroot -p'******' -P12345 -h218.78.186.162 -e "select * from test" test
Warning: Using a password on the command line interface can be insecure.
+-----------------------------+
| t                           |
+-----------------------------+
| this is 125.208.14.177:3306 |
+-----------------------------+
[root@iZ236mlq2naZ ~]# mysql -uroot -p'******' -P12345 -h218.78.186.162 -e "select * from test" test
Warning: Using a password on the command line interface can be insecure.
+-----------------------------+
| t                           |
+-----------------------------+
| this is 125.208.14.177:3307 |
+-----------------------------+
[root@iZ236mlq2naZ ~]# mysql -uroot -p'******' -P12345 -h218.78.186.162 -e "select * from test" test
Warning: Using a password on the command line interface can be insecure.
+-----------------------------+
| t                           |
+-----------------------------+
| this is 125.208.14.177:3306 |
+-----------------------------+
[root@iZ236mlq2naZ ~]# mysql -uroot -p'******' -P12345 -h218.78.186.162 -e "select * from test" test
Warning: Using a password on the command line interface can be insecure.
+-----------------------------+
| t                           |
+-----------------------------+
| this is 125.208.14.177:3306 |
+-----------------------------+
Next, a read/write splitting experiment. Configuration file:
worker_processes auto;
error_log /var/log/nginx/error.log info;

events {
    worker_connections 1024;
}

stream {
    upstream readdb {
        hash $remote_addr consistent;
        # read backends
        server 125.208.14.177:3306 weight=5 max_fails=3 fail_timeout=30s;
        server 125.208.14.177:3307 weight=4 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 12345;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass readdb;
    }

    upstream writedb {
        hash $remote_addr consistent;
        # write backend
        server 125.208.14.177:3308 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 23456;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass writedb;
    }
}
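With this layout the client chooses the port according to the kind of statement it runs: reads go through port 12345 (the readdb upstream) and writes through port 23456 (the writedb upstream). A usage sketch against the same test table as above (credentials and the inserted value are illustrative):

# read: proxied to one of the readdb backends (3306/3307)
mysql -uroot -p'******' -P12345 -h218.78.186.162 -e "select * from test" test
# write: proxied to the writedb backend (3308)
mysql -uroot -p'******' -P23456 -h218.78.186.162 -e "insert into test(t) values ('hello')" test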
Personal take: this is really just TCP proxying across multiple ports; the client has to use one port for reads and another port for writes, which is rather cumbersome. As read/write splitting it still falls a bit short, although as load balancing it works well. For read/write splitting in the true sense, a tool like Atlas is the better choice.
Finally, you can combine HTTP load balancing and TCP load balancing in the same nginx configuration to serve multiple purposes at once.
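As a rough sketch of what that looks like (the upstream addresses and ports are illustrative), the http and stream contexts simply sit side by side at the top level of nginx.conf:

worker_processes auto;
error_log /var/log/nginx/error.log info;

events {
    worker_connections 1024;
}

# layer-7 (HTTP) load balancing
http {
    upstream web_backend {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://web_backend;
        }
    }
}

# layer-4 (TCP) load balancing
stream {
    upstream db_backend {
        server 125.208.14.177:3306 weight=5;
        server 125.208.14.177:3307 weight=4;
    }

    server {
        listen 12345;
        proxy_pass db_backend;
    }
}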