Readers often ask Inside Jun: what about MySQL partitioning? Can it be used at all? Does it have a lot of bugs? I am not sure why the public has this general impression of MySQL partitioning. The partition-related problem that left the deepest impression on Inside Jun is the case below (strictly speaking, it can hardly be called a bug), and readers have consulted Inside Jun about partitioning because of it. The good news is that this issue has been fixed in 5.7 (to be precise, 5.7 adds support for Native Partitioning), which looks like one more reason to upgrade to 5.7. So, in the end, can partitioning be used safely in production?
A month ago, Scott and his colleagues found that the master of one of the company's MySQL MHA clusters (call the machine hostA) would hang roughly every week or two (by "hang" I mean mysqld died), and within a few weeks MHA had failed over back and forth several times.
Following standard practice, Scott investigated what was going wrong in the following order:
1. First, check the MySQL error log: no abnormalities found.
2. Next, check the Linux system log, and sure enough, it contained the following:
Nov 26 13:05:38 hostA kernel: mysql invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=0
Nov 26 13:05:38 hostA kernel: Out of memory: Kill process 32271 (mysqld) score 976 or sacrifice child
Nov 26 13:05:38 hostA kernel: Killed process 32271, UID 496, (mysqld) total-vm:83064212kB, anon-rss:64204132kB, file-rss:4544kB
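A quick way to confirm whether the OOM killer has fired is to filter the kernel log for its telltale lines. A minimal sketch (the log location varies by distro, e.g. /var/log/messages on RHEL-family systems or /var/log/syslog on Debian-family ones, and `journalctl -k` or `dmesg` also work as sources):

```shell
# oom_events: filter OOM-killer events out of kernel-log text on stdin.
# Usage examples (assumed paths, adjust for your distro):
#   dmesg | oom_events
#   journalctl -k | oom_events
oom_events() {
  grep -E 'invoked oom-killer|Out of memory|Killed process'
}
```

Matching all three phrases catches the trigger line, the kill decision, and the final kill record, as in the log excerpt above.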
The machine has 62G of physical memory, and the log above shows that MySQL had indeed used it all up. innodb_buffer_pool_size on this machine was 31G, which Scott considered quite conservative; the various other buffer sizes were left at their defaults, and the connection count at the time of the OOM was only 100+.
On paper none of this should have been a problem, yet an OOM actually happened, which was hard to believe. Thinking the machine was simply short of memory, and without finding the real cause, they upgraded it from 62G to 125G (raising innodb_buffer_pool_size to 64G, still a very conservative value). The OOM happened again anyway.
In fact, Scott had noticed earlier that something was wrong with this machine: xtrabackup backups always failed because the system kept hitting its limit on the maximum number of open files (the detailed xtrabackup story will have to wait until Scott writes up that backup process separately). He then went to check the *.ibd and *.frm files on the machine, and the counts startled him (they startled Inside Jun too):
[userA@hostA mysql]$ sudo find . -name '*.ibd' | wc -l
[userA@hostA mysql]$ sudo find . -name '*.frm' | wc -l
That is, the machine had about 170,000 .ibd files but only 2,534 tables; clearly some of the partitioned tables had a very large number of partitions.
[userA@hostA mysql]$ sudo find . -name '*par*' | wc -l
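To pinpoint which tables contribute the most partitions, the file names themselves can be grouped: InnoDB stores each partition's tablespace as `<table>#P#<partition>.ibd`, so stripping everything from `#P#` onward groups the files by table. A sketch (the datadir path is an assumption; run it against your actual MySQL data directory):

```shell
# top_partitioned: given a MySQL datadir, list the tables with the most
# partition tablespace files. Partition .ibd files are named
# <table>#P#<partition>.ibd, so stripping from '#P#' yields the table path.
top_partitioned() {
  datadir=$1
  find "$datadir" -name '*#P#*.ibd' \
    | sed 's/#P#.*//' \
    | sort | uniq -c | sort -rn | head
}
```

Running e.g. `top_partitioned /var/lib/mysql` prints a count of partition files next to each table path, largest first.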
Scott carefully compared this machine with other machines that had no problems, and the unusually large number of partitions was the only difference. That left him no choice but to suspect the partitions as the cause.
Still following standard practice, Scott first checked the official MySQL 5.6 documentation, to no avail (the official documentation is not a panacea, but it should still be the first reference when a problem appears). He then searched the MySQL bugs page for partition-related bugs, again to no avail. Finally, a Google search turned up a few articles on some sites describing MySQL memory exhaustion caused by too many partitions, but their explanations did not feel quite right.
The article above was written by the MySQL development team and is about InnoDB Native Partitioning. Roughly, it says that in 5.6 partition metadata is maintained at the MySQL Server layer (in the .par file); the InnoDB engine layer is not aware of the partitioning concept and treats each partition as an ordinary InnoDB table. Opening a partitioned table therefore opens all of its partitions, which is equivalent to opening the same number of separate InnoDB tables, and that requires correspondingly more memory for each table's metadata, the open-file cache for each ibd file, and the associated handler information. In 5.7, InnoDB introduces Native Partitioning, which moves the partitioning information from the Server layer down into the InnoDB layer, so opening a partitioned table costs roughly the same memory as opening a single InnoDB table.
If we compare the amount of memory used when opening a single instance of this table, first using the old generic non-native partitioning, and then with InnoDB Native Partitioning we see the following:
One open instance of the table takes 49% less memory (111MB vs 218MB) with the current state of Native Partitioning support. With ten open instances of the table, we take up 90% less memory (113MB vs 1166MB)!
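The quoted percentages follow directly from the MB figures: 1 - 111/218 ≈ 49% and 1 - 113/1166 ≈ 90%. A one-liner to recompute them:

```shell
# Recompute the memory savings quoted in the article (MB figures as given).
awk 'BEGIN {
  printf "one instance: %.0f%% less (111MB vs 218MB)\n",  (1 - 111/218)  * 100
  printf "ten instances: %.0f%% less (113MB vs 1166MB)\n", (1 - 113/1166) * 100
}'
```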
Since upgrading to 5.7 will take some time, they reduced the number of partitions to about 25,000 for now. The free memory on the 125G machine has stayed stable at around 20G for 20 days since, which confirms that the excessive number of partitions really was the cause.