
  CentOS6: solving MongoDB connections that cannot break 1000
  Add Date : 2018-11-21
Problem description:

In production, the CPU was running at full capacity and the number of MongoDB connections could never break through 1000.

1. Checking the mongodb log shows the following error:

Wed Nov 21 15:26:09 [initandlisten] pthread_create failed: errno: 11 Resource temporarily unavailable

Wed Nov 21 15:26:09 [initandlisten] can not create new thread, closing connection

2. Testing on a different machine running CentOS5, 2000 connections worked without any problem.

3. Searching Google for the keywords "mongod.conf can not create new thread, closing connection".

4. Found the cause: unlike CentOS5, CentOS6 ships an additional default nproc limit file, /etc/security/limits.d/90-nproc.conf, which caps ordinary users at nproc 1024. MongoDB happens to run as the non-root user mongod, so the connection count stalled at that limit.

5. Changing 1024 to 20480 in /etc/security/limits.d/90-nproc.conf solved the problem.

[root@test ~]# cat /etc/security/limits.d/90-nproc.conf

# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

* soft nproc 20480
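After editing the file, you can confirm what limit a process actually inherits. A minimal check, reading the Linux /proc interface (a new login session for the affected user, e.g. mongod, should show the raised value):

```shell
#!/bin/sh
# Show the "Max processes" (nproc) limit the current shell inherited.
# After editing 90-nproc.conf, a fresh login for the affected user
# should report the new value here.
grep 'Max processes' /proc/self/limits
```

Note that limits set via limits.d/limits.conf are applied by PAM at login, so an already-running mongod keeps its old limit until it is restarted from a new session.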

The maximum number of open file handles and the per-user process limit:

When deploying applications on Linux you sometimes hit "Socket/File: Can't open so many files" errors. This value also caps the maximum number of concurrent connections a server can handle. Linux limits the number of file handles each process may open, and the default is not very high, usually 1024, which a production server can easily reach. Below is how to check and correct the system's default configuration.

How to check

Use ulimit -a to see all limits:

[root@test ~]# ulimit -a

core file size (blocks, -c) 0

data seg size (kbytes, -d) unlimited

scheduling priority (-e) 0

file size (blocks, -f) unlimited

pending signals (-i) 256469

max locked memory (kbytes, -l) 64

max memory size (kbytes, -m) unlimited

open files (-n) 64000

pipe size (512 bytes, -p) 8

POSIX message queues (bytes, -q) 819200

real-time priority (-r) 0

stack size (kbytes, -s) 10240

cpu time (seconds, -t) unlimited

max user processes (-u) 65536

virtual memory (kbytes, -v) unlimited

file locks (-x) unlimited

Here "open files (-n)" is the limit on the number of file handles a single process may open on Linux; the default is 1024.
(This also counts open sockets, so it can cap the number of concurrent database connections.)
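To see how close a process is to this limit, you can compare its open file descriptors against its soft limit via /proc. A small Linux-specific sketch, run against the current shell:

```shell
#!/bin/sh
# Compare how many file descriptors the current shell ($$) has open
# against its soft open-files limit, using the /proc filesystem.
fd_count=$(ls /proc/$$/fd | wc -l)
limit=$(grep 'Max open files' /proc/$$/limits | awk '{print $4}')
echo "open fds: $fd_count (soft limit: $limit)"
```

Substituting a mongod PID for $$ shows how many descriptors (files plus sockets) the database is actually using.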

The correct approach is to modify /etc/security/limits.conf, which contains very detailed comments. For example:

hadoop soft nofile 32768
hadoop hard nofile 65536

hadoop soft nproc 32768
hadoop hard nproc 65536

This sets the file-handle limit for the hadoop user to a soft value of 32768 and a hard value of 65536. The first column of each line is the domain: an asterisk means the setting applies globally, and you can also set different limits for different users.

Note: the hard limit is the enforced ceiling, while exceeding the soft limit only produces a warning. The ulimit command itself distinguishes the two: -H selects the hard limit and -S the soft limit.

By default ulimit displays the soft limit, and if you change a value without specifying -H or -S, both limits are changed at once.

On RHEL6 and later, also modify nproc in /etc/security/limits.d/90-nproc.conf.
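The soft/hard distinction can be seen directly in a shell: a subshell may lower its own soft limit without touching the hard limit or affecting the parent. A sketch (512 is an arbitrary value below the usual defaults):

```shell
#!/bin/bash
# Lower only the soft open-files limit inside a subshell; the hard
# limit stays where it was, and the parent shell is unaffected.
(
  ulimit -S -n 512
  echo "soft: $(ulimit -S -n)"
  echo "hard: $(ulimit -H -n)"
)
```

An unprivileged process can lower its hard limit too, but can never raise it back; raising a hard limit requires root.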

How to modify the process limit:

Temporary modification (changes the number of processes the current shell's user can create, for this shell and its children only):

# ulimit -u xxx
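A concrete example of the temporary change, using the 20480 value from earlier (an unprivileged user can only raise the soft limit up to the hard limit):

```shell
#!/bin/bash
# Temporarily set the soft nproc limit for this shell and its children.
# Fails (without killing the shell) if 20480 exceeds the hard limit.
ulimit -S -u 20480 2>/dev/null || echo "could not raise soft nproc past the hard limit"
echo "effective soft nproc: $(ulimit -S -u)"
```

The change disappears when the shell exits, which is why the permanent method below is needed for services like mongod.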

For a permanent change, the safe practice is to modify both /etc/security/limits.d/90-nproc.conf and /etc/security/limits.conf, as follows:

limits_conf = /etc/security/limits.conf:
* soft nproc s1
* hard nproc h1

nproc_conf = /etc/security/limits.d/90-nproc.conf:
* soft nproc s2
* hard nproc h2

Here s1, h1, s2, h2 must be concrete, meaningful numbers; the value reported by ulimit -u will then be min(h1, h2).

So the usual practice is to set s1 = s2 = h1 = h2, for example by adding to both limits_conf and nproc_conf:

* soft nproc 65536
* hard nproc 65536
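The two-file recipe above can be scripted. A sketch that writes identical soft/hard lines to both files so min(h1, h2) comes out as intended; it writes to temporary files for illustration, and the paths would be /etc/security/limits.conf and /etc/security/limits.d/90-nproc.conf when run as root:

```shell
#!/bin/sh
# Append matching soft/hard nproc lines to both limit files.
# NPROC is an example value; the mktemp paths are stand-ins for the
# real files under /etc/security.
NPROC=65536
limits_conf=$(mktemp)   # stand-in for /etc/security/limits.conf
nproc_conf=$(mktemp)    # stand-in for /etc/security/limits.d/90-nproc.conf
for f in "$limits_conf" "$nproc_conf"; do
  printf '* soft nproc %s\n* hard nproc %s\n' "$NPROC" "$NPROC" >> "$f"
done
cat "$nproc_conf"
rm -f "$limits_conf" "$nproc_conf"
```

Because pam_limits applies these files at login, log out and back in (or restart the service from a fresh session) for the new values to take effect.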
  CopyRight 2002-2022 newfreesoft.com, All Rights Reserved.