  The Linux OOM Terminator
  Add Date : 2018-11-21      
It is six o'clock in the morning. I am awake, trying to work out what made my alarm go off so early. As these stories usually start, the ringing had just stopped. Grumpy and sleepy, I checked my phone to see whether I had really been crazy enough to set the alarm for 5:00 AM. I had not. It was our monitoring system, reporting that the Plumbr service had gone down.

As a seasoned veteran in the field, I took the first correct step toward solving the problem: I switched on the coffee machine. With a cup of coffee in hand I could start working through the failure. The first suspect was the application itself, since it had crashed without the slightest warning. The application logs showed no errors, no warnings, nothing suspicious at all.

The monitoring system we had deployed had detected the death of the process and had already restarted the service. With caffeine now flowing in my bloodstream, my confidence returned. Sure enough, 30 minutes later I found the following lines in /var/log/kern.log:

Jun  4 07:41:59 plumbr kernel: [70667120.897649] Out of memory: Kill process 29957 (java) score 366 or sacrifice child
Jun  4 07:41:59 plumbr kernel: [70667120.897701] Killed process 29957 (java) total-vm:2532680kB, anon-rss:1416508kB, file-rss:0kB
Clearly we had been bitten by the Linux kernel. As you know, Linux is home to a number of unholy creatures called daemons (demons, if you like), and one of them is particularly vicious. Every modern Linux kernel has a built-in Out of Memory Killer (OOM Killer): under severe memory pressure it activates, picks a victim process, and kills it. The victim is chosen by a heuristic that assigns a score to every process and then selects the process with the highest score.
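That badness score is exported for every process under /proc, so you can see in advance who the OOM Killer would target. The snippet below is my own illustration, not part of the original incident, and assumes a Linux /proc filesystem:

```shell
# Print this shell's own OOM badness score
# (higher score = more likely to be killed first).
cat /proc/$$/oom_score

# List the five processes the OOM Killer would consider first.
for p in /proc/[0-9]*; do
  s=$(cat "$p/oom_score" 2>/dev/null) || continue
  printf '%s\t%s\n' "$s" "${p#/proc/}"
done | sort -rn | head -5
```

Large, long-running processes with lots of resident memory tend to score highest, which is exactly why a busy JVM is such an attractive victim.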

Understanding "Out of memory killer"

By default, the Linux kernel allows processes to request more memory than is actually available. This makes sense in the real world, because most processes never use all of the memory they allocate (note: allocated is not the same as used). The closest analogy is an ISP: they sell every customer a 100 Mb connection, far beyond what their network could sustain if everyone used it at once. They are betting that customers will not all run at their bandwidth limit at the same time, so a single 10 Gb link can comfortably serve 100 customers or more, where 100 comes from simple arithmetic (10G / 100M).
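You can inspect how the kernel is currently accounting for these promises; the files read below exist on any Linux system (a sketch for illustration only):

```shell
# 0 = heuristic overcommit (the kernel default), 1 = always grant requests,
# 2 = strict accounting (requests beyond the commit limit fail).
cat /proc/sys/vm/overcommit_memory

# CommitLimit is how much the kernel is willing to promise in total;
# Committed_AS is how much it has already promised to running processes.
grep -E '^(CommitLimit|Committed_AS)' /proc/meminfo
```

When Committed_AS exceeds what the system can actually back with RAM and swap, the overselling bet has been lost.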

The obvious side effect of this overselling is the question of what happens when a program actually tries to use all the memory it was promised. The system ends up critically low on memory, with no pages left to hand out to processes. You may even find yourself in a situation where not even root can kill the stubborn process. To get out of this state, the kernel activates the OOM Killer, which must find a process to terminate.

The parameters that tune the OOM Killer's behaviour are covered in a separate article.
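Two commonly used knobs are sketched below as root-only commands; the `pidof java` lookup is just an illustration that assumes a single JVM on the host:

```shell
# Make a critical process a much less attractive victim. The valid
# range is -1000..1000; -1000 exempts the process entirely (needs root).
echo -500 > /proc/$(pidof java)/oom_score_adj

# Or switch to strict accounting, so an over-large malloc() fails
# immediately instead of the OOM Killer firing later:
sysctl vm.overcommit_memory=2
```

Note that exempting a process only redirects the OOM Killer at the next-highest scorer; it does not create more memory.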

Who triggered the Out of memory killer?

Although I now knew what had happened, I still did not know what had triggered the OOM Killer and woken me at five in the morning. Further analysis revealed the answer:

- The /proc/sys/vm/overcommit_memory setting allowed unlimited overcommit: its value was set to 1, which means every malloc() request succeeds.
- The application was running on a single EC2 m1.small instance, and EC2 instances have swap disabled by default.
These two factors, combined with a sudden spike in traffic to our service, left the application requesting more and more memory to serve the extra users. The overcommit configuration let the process keep being greedy, which eventually triggered the low-memory condition and the OOM Killer, and it did exactly what it was designed to do: it killed our application and woke me up in the middle of the night.


When I later described the incident to our engineers, one of them found it interesting enough to write a small test case reproducing the problem. You can compile and run the following code on Linux (I ran it on the latest stable Ubuntu release):

package eu.plumbr.demo;

public class OOM {
  public static void main(String[] args) {
    // Keep allocating ~400 MB int arrays until memory runs out.
    java.util.List<int[]> l = new java.util.ArrayList<>();
    for (int i = 10000; i < 100000; i++) {
      try {
        l.add(new int[100_000_000]);
      } catch (Throwable t) {
        t.printStackTrace();
      }
    }
  }
}

Run it and you will find the same "Out of memory: Kill process (java) score ... or sacrifice child" message in kern.log.

Note that you may need to adjust the swap size and the heap size. In my test I used a 2 GB heap, set via -Xmx2g, and configured swap as follows:

swapoff -a
dd if=/dev/zero of=swapfile bs=1024 count=655360
mkswap swapfile
swapon swapfile
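After running the commands above, it is worth confirming that the swap file is actually active; this sanity check is my own addition, not part of the original write-up:

```shell
# SwapTotal in /proc/meminfo should now be non-zero (~640 MB here).
grep -E '^Swap(Total|Free)' /proc/meminfo

# swapon with no arguments also lists the active swap areas.
swapon --show
```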


There are several ways to solve this situation. In our case, we simply migrated to a machine with more memory (yes, the lazy way out). I also considered enabling swap, but engineers I consulted reminded me that the JVM's garbage collector performs poorly once the heap starts swapping, so I ruled that option out.

Other approaches would include tuning the OOM Killer, distributing the load across several smaller instances, or reducing the application's memory footprint.
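As an illustration of that last option: capping the JVM heap well below physical memory turns a runaway allocation into a catchable java.lang.OutOfMemoryError inside the application instead of a kernel-level kill. The class name comes from the demo above; the heap size is just an example:

```shell
# With a bounded heap, the allocation loop fails inside the JVM,
# where the catch block logs it, rather than triggering the OOM Killer.
java -Xmx256m eu.plumbr.demo.OOM
```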