In our production environment we found the CPU running at full capacity, yet the number of MongoDB connections could never break through 1000.
1. Checked the MongoDB log and found the following error messages:
Wed Nov 21 15:26:09 [initandlisten] pthread_create failed: errno: 11 Resource temporarily unavailable
Wed Nov 21 15:26:09 [initandlisten] can not create new thread, closing connection
2. On another machine running CentOS 5, a test with 2000 connections showed no problem at all.
3. Searched for the problem on Google with the keywords "mongod.conf can not create new thread, closing connection".
4. Found the cause: unlike CentOS 5, CentOS 6 ships an additional default nproc limit profile, /etc/security/limits.d/90-nproc.conf, which caps nproc for ordinary users at 1024 by default. Since mongod runs as a non-root user, the connection count stalled at that limit.
5. Changed 1024 to 20480 in /etc/security/limits.d/90-nproc.conf, and the problem was solved.
[root@test ~]# cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*          soft    nproc     20480
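After editing the file, it is worth confirming that a fresh login shell actually picks up the new limit. A minimal check (the service account name "mongod" is an assumption here; substitute whatever user your mongod runs as):

```shell
# Print the max-user-processes limit of the current shell. A freshly
# started login shell for the service user should report 20480 after the
# edit above, e.g. via: su - mongod -s /bin/bash -c 'ulimit -u'
ulimit -u
```

Note that files under /etc/security/limits.d/ are read by PAM when a session opens, so mongod must be restarted from a new session before the new value applies.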
Maximum number of open file handles and the max user processes limit:
When deploying applications on Linux you sometimes run into "Socket/File: Can't open so many files" errors. This value also caps a server's maximum number of concurrent connections. Linux does have a per-process file handle limit, but the default is not very high, usually 1024, and a production server can easily reach it. Below is how to correct the system's default configuration.
All current limits can be viewed with ulimit -a:
[root@test ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 256469
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 64000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Here "open files (-n)" is the limit on the number of file handles a single process may open on Linux; the default is 1024. (Open sockets count toward it as well, so it can cap the number of concurrent database connections.)
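Because sockets consume descriptors too, a quick way to see how close a process is to its open-files limit is to count the entries under /proc/<pid>/fd. A Linux-only sketch, using the current shell's pid ($$) as a stand-in for mongod's real pid:

```shell
# Count the file descriptors (regular files, pipes and sockets alike)
# currently open in this shell; compare against "open files" in ulimit -a.
ls /proc/$$/fd | wc -l
```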
The correct approach is to modify /etc/security/limits.conf, which contains very detailed comments. For example:
hadoop soft nofile 32768
hadoop hard nofile 65536
hadoop soft nproc 32768
hadoop hard nproc 65536
This sets a unified file-handle limit of 32768 soft and 65536 hard. The first field of each line in the file is the domain; an asterisk means the limit applies to all users, and different limits can also be set for different users.
Note: the hard limit is the ceiling that only root may raise, while the soft limit is the value actually enforced for the process; an unprivileged user may raise the soft limit, but only up to the hard limit. The ulimit command itself distinguishes the two: -H operates on the hard limit and -S on the soft limit. By default ulimit displays the soft limit, and if you run ulimit to change a value without either flag, both limits are changed at once.
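The soft/hard behaviour is easy to observe in a throwaway subshell, so the current shell's limits stay untouched. A sketch assuming bash, with 512 as an arbitrary value below the usual hard limit:

```shell
bash -c '
  echo "hard ceiling: $(ulimit -H -n)"
  ulimit -S -n 512                    # lower only the soft limit
  echo "soft now: $(ulimit -S -n)"    # prints 512
  # an unprivileged user may raise the soft limit again, up to the hard one:
  ulimit -S -n "$(ulimit -H -n)" 2>/dev/null
  echo "soft after raising: $(ulimit -S -n)"
'
```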
On RHEL 6 and later, also modify nproc in /etc/security/limits.d/90-nproc.conf.
How to modify the process limit:
Temporary modification (changes the limit for processes started from the current shell):
# ulimit -u xxx
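To try the temporary change without touching the login shell, run it in a throwaway subshell (bash assumed; 1024 is just an example value):

```shell
# The change affects only this subshell and the processes it starts;
# the parent shell keeps its original limit.
bash -c 'ulimit -u 1024; ulimit -u'   # prints 1024
```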
For a permanent change, the safe practice is to modify both /etc/security/limits.d/90-nproc.conf and /etc/security/limits.conf, as follows:
limits_conf = /etc/security/limits.conf:
* soft nproc s1
* hard nproc h1
nproc_conf = /etc/security/limits.d/90-nproc.conf:
* soft nproc s2
* hard nproc h2
Here s1, h1, s2 and h2 must be concrete numbers; the value then reported by ulimit -u is min(h1, h2). So the usual practice is to set s1 = s2 = h1 = h2, for example by adding to both limits_conf and nproc_conf:
* soft nproc 65536
* hard nproc 65536
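The min(h1, h2) rule above can be sketched with plain shell arithmetic (illustrative values only):

```shell
# If limits.conf sets hard nproc h1 and 90-nproc.conf sets hard nproc h2,
# the effective value reported by `ulimit -u` is the smaller of the two.
h1=65536
h2=20480
effective=$(( h1 < h2 ? h1 : h2 ))
echo "$effective"    # prints 20480
```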