Using RPS in the Linux kernel: load-balancing the network receive softirq in software
     
Add Date: 2018-11-21
         
         
         
The Linux softirq distribution mechanism and its problem

Linux divides interrupt handling into two halves. In general (and in fact), the CPU that takes the hard interrupt executes the interrupt handler and raises a softirq (the bottom half) on that same CPU. After the hard-interrupt processing returns and interrupts are re-enabled, that CPU either runs the pending softirq directly or wakes up its softirq kernel thread to process it.
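To make the pattern concrete, here is a minimal sketch of a typical driver receive interrupt (the foo_* names are hypothetical, not from any real driver; napi_schedule() is the real kernel API, and it raises NET_RX_SOFTIRQ on the CPU that took the interrupt):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* Hypothetical driver state; only the napi member matters here. */
struct foo_nic {
    struct napi_struct napi;
};

static irqreturn_t foo_nic_interrupt(int irq, void *dev_id)
{
    struct foo_nic *nic = dev_id;

    /* (Masking further RX IRQs is omitted for brevity.)
     * napi_schedule() raises NET_RX_SOFTIRQ on the interrupted CPU,
     * so the bottom half is pinned to wherever the hard IRQ landed. */
    napi_schedule(&nic->napi);
    return IRQ_HANDLED;
}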

In other words, in Linux the top half and the softirq bottom half associated with an interrupt vector execute on the same CPU, as the raise_softirq interface shows. This design logic is correct, but on less intelligent hardware it does not work well. The kernel has no way to control where softirqs are distributed; it can only follow wherever the hard interrupt happened to land. This falls into two categories:

1. The hardware interrupts only one CPU. By the logic above, even if the system has multiple CPU cores, only that one CPU ever runs the softirq, which obviously leaves the load unbalanced across the CPUs.

2. The hardware interrupts multiple CPUs blindly. Note the word "blind": the distribution is a matter of the motherboard and the bus and has little to do with the interrupt source. Which CPU gets interrupted therefore has no relation to the interrupt source's business logic; the motherboard and the interrupt controller cannot, say, look at a NIC's packet contents and steer the interrupt to a different CPU based on the packet's metadata... In other words, the interrupt source has almost no control over which CPU it interrupts. And why must the interrupt source be the one to decide? Because only it knows its own business logic; this is, in turn, an end-to-end design question.

Linux's softirq scheduling therefore lacks a layer of control logic; it is too inflexible, relying entirely on which CPU the interrupt source's hardware interrupt happens to hit. And since the interrupt source is isolated from the CPUs by the interrupt controller and the bus, the two sides cannot cooperate well. So we need to add a layer of softirq scheduling on top to solve this problem.

This article does not describe a universal solution to the problem above, because it addresses only network packet processing. RPS was designed from the start by people at Google; its design is highly customizable and its goal is simple: to raise the performance of Linux servers. I ported the idea to improve the performance of Linux routers instead.

RPS-based softirq distribution for forwarding optimization

In the article "Linux forwarding performance evaluation and optimization (forwarding bottleneck analysis and solutions)" I tried to load-balance the NIC receive softirq by once again splitting it into two halves (see the sketch after this list):

Top half: distribute the skbs among the different CPUs.

Bottom half: the actual protocol-stack receive processing of the skb.
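A minimal sketch of that split, with hypothetical helper names (pick_target_cpu, cpu_backlog_enqueue, trigger_remote_net_rx are illustrative only; netif_receive_skb and smp_processor_id are the real kernel APIs):

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/smp.h>

/* Hypothetical helpers, declared only to keep the sketch self-contained. */
static int pick_target_cpu(struct sk_buff *skb);
static void cpu_backlog_enqueue(int cpu, struct sk_buff *skb);
static void trigger_remote_net_rx(int cpu);

/* Top half: choose a CPU for the skb and hand it over; the bottom half
 * (the normal stack receive path) then runs on that CPU. Assumes it is
 * called from softirq context on the interrupted CPU. */
static void rx_distribute(struct sk_buff *skb)
{
    int cpu = pick_target_cpu(skb);      /* e.g. a hash over the tuple */

    if (cpu == smp_processor_id()) {
        netif_receive_skb(skb);          /* process locally, no handoff */
    } else {
        cpu_backlog_enqueue(cpu, skb);   /* per-CPU input queue */
        trigger_remote_net_rx(cpu);      /* IPI so the target CPU runs
                                          * its NET RX softirq */
    }
}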

In fact, reusing the RPS idea that entered Linux in 2.6.35 may be the better approach; there is no need to re-split the network receive softirq. This rests on the following facts:

Fact 1: the NIC is very high-end. If so, it must support hardware multi-queue and multiple interrupt vectors, so you can bind each queue's interrupt directly to a CPU core; no softirq redistribution of skbs is needed at all.
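For instance (a sketch of the common pattern, not any specific driver; struct foo_queue is hypothetical), a multi-queue driver can spread its per-queue IRQ vectors across cores with irq_set_affinity_hint(); an administrator can achieve the same from user space via /proc/irq/<N>/smp_affinity:

#include <linux/interrupt.h>
#include <linux/cpumask.h>

struct foo_queue {
    int irq;    /* the MSI-X vector assigned to this RX queue */
};

/* Bind queue i's interrupt vector to CPU i: each queue's hard IRQ,
 * and hence its softirq, then stays on its own core. */
static void foo_bind_queue_irqs(struct foo_queue *q, int nr_queues)
{
    int i;

    for (i = 0; i < nr_queues; i++)
        irq_set_affinity_hint(q[i].irq,
                              cpumask_of(i % num_online_cpus()));
}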

Fact 2: the NIC is very low-end. If so, it supports, for example, neither multiple queues nor multiple interrupt vectors, and cannot load-balance its interrupt; then there is no need to have the softirq do the distribution either. Wouldn't it be better to distribute directly inside the driver (in fact, doing that is really bad)? Indeed, even if a single-interrupt-vector NIC does support interrupt load balancing across CPUs, it is best to disable it, because it destroys CPU cache affinity.

Why can we not exploit the two facts above directly in the interrupt handler? Because complex, time-consuming operations must not run in interrupt context; no elaborate calculation can be done there. Moreover, interrupt handlers are device-specific and are generally written by the driver authors, not by the framework; the core stack only maintains a set of interfaces for drivers to call. Can you guarantee that every driver writer will use RPS correctly and never misuse it?

The correct approach is to hide this whole mechanism and expose only a set of configuration knobs to the outside: you (the driver writer, or the administrator) can turn it on or off, but how it works internally is not your concern. (This is exactly what mainline RPS does: it is enabled per receive queue via /sys/class/net/<dev>/queues/rx-<n>/rps_cpus.)

So the final plan is still like my first one, and RPS appears to follow the same train of thought: modify the softirq path around the NAPI poll callback! However, the poll callback is maintained by the driver, so instead a HOOK is placed on the common packet-receive path to take care of the RPS processing.

Why disable interrupt load balancing for low-end NICs?

The answer seems simple: because our own software can do it better! And the simple, blind interrupt load balancing done by simple hardware may be (almost certainly is) self-defeating!

Why is that? Because simple low-end NIC hardware cannot recognize network flows; it can only tell that something is a packet, not what the packet's tuple information is. If the first packet of a flow is delivered to CPU1 and the second to CPU2, then the flow's shared data, such as its nf_conntrack record, gets poor CPU-cache utilization, and cache bouncing becomes severe. For TCP flows, the nondeterminism of processing the packets of a serial stream in parallel may also add delay and reorder the packets. The most direct idea, therefore, is to distribute all packets belonging to one flow onto one CPU.

Modifying the native RPS code

I modified the native RPS code. You should know that the RPS feature of Linux was introduced by Google staff, and their goal was to improve server processing efficiency. So they focused on the following information:

Which CPU provides service to the data flow;

Which CPU is interrupted by the NIC that receives the flow's packets;

Which CPU runs the softirq that processes the flow's packets.

Ideally, for efficient use of the CPU caches, the three CPUs above should be one and the same CPU; making that happen is the primary purpose of RPS. To this end, the kernel has to maintain a "flow table" recording the three kinds of CPU information above. It is not a real tuple-based flow table, just a table recording that CPU information. (A conceptual sketch follows.)
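Conceptually it looks like this (field and table names are illustrative, not the kernel's actual rps_sock_flow_table / rps_dev_flow_table definitions): a hash-indexed array of CPU records rather than a tuple-keyed flow table.

/* Entries are indexed by the flow hash and store only the CPU
 * bookkeeping that RPS needs. */
struct flow_cpu_record {
    unsigned int service_cpu;   /* CPU running the consuming process */
    unsigned int irq_cpu;       /* CPU the NIC interrupt lands on    */
    unsigned int softirq_cpu;   /* CPU running the stack's softirq   */
};

#define FLOW_TABLE_SIZE (1 << 10)   /* illustrative size */
static struct flow_cpu_record flow_table[FLOW_TABLE_SIZE];
/* lookup: flow_table[flow_hash & (FLOW_TABLE_SIZE - 1)] */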

My needs are different: I focus on data forwarding, not local processing. So I only care about:

Which CPU is interrupted by the NIC that receives the flow's packets;

Which CPU runs the softirq that processes the flow's packets.

In fact, I do not care which CPU the packet-sending scheduler runs on: the sending thread just dequeues an skb from the VOQ and transmits it. It does not process the packet; it does not even touch the packet's contents (including the protocol headers), so cache utilization is not a priority for the sending thread.

So whereas a Linux server cares about which CPU provides the service for a packet's flow, a Linux box acting as a router can ignore the CPU of the transmit logic (although it too can be optimized a little via the shared-cache relay described in the last section). As a router, everything must be fast and as simple as possible, because a router lacks the inherent service delay that Linux has when it runs as a server: querying databases, executing business logic. That inherent service delay is much larger than the network-stack processing delay, so for a server the protocol stack is not the bottleneck. What is a server? A server is the end of a packet's journey; there, the protocol stack is merely an entrance, a piece of infrastructure.

When running as a router, however, the network stack's processing delay is the only delay, so optimize it! What is a router? A router is not the end of the packet's journey; it is a place the packet has to pass through but wants to leave as quickly as possible!

So I did not take the native RPS approach directly; I simplified the hash calculation and no longer maintain any state information. I just compute a hash:


target_cpu = my_hash(source_ip, destination_ip, l4proto, sport, dport) % NR_CPUS;

(my_hash only needs to spread this information evenly enough!)
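A minimal sketch of what my_hash could be, assuming the kernel's Jenkins hash from <linux/jhash.h> (any function that spreads the tuple evenly would do; the article does not specify one):

#include <linux/jhash.h>
#include <linux/types.h>

/* Stateless flow hash: same tuple -> same value -> same target CPU. */
static inline u32 my_hash(u32 source_ip, u32 destination_ip, u8 l4proto,
                          u16 sport, u16 dport)
{
    return jhash_3words(source_ip, destination_ip,
                        ((u32)sport << 16) | dport, l4proto);
}

/* usage, as in the formula above:
 *   target_cpu = my_hash(saddr, daddr, proto, sport, dport) % NR_CPUS;
 */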

Nothing more. So get_rps_cpu can consist of just that one statement.

There is one complication to consider: if you receive an IP fragment that is not the first fragment, the layer-4 information cannot be extracted, so the fragments of one datagram might be distributed to different CPUs. When the IP layer needs to reassemble them, this brings cross-CPU data access and synchronization problems. This problem remains to be dealt with.
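One possible mitigation (my sketch, not a solution the article settles on): when a packet is any kind of fragment, fall back to a 2-tuple hash over the addresses only, including for the first fragment, so every fragment of a datagram lands on the CPU that will reassemble it.

#include <linux/ip.h>
#include <linux/jhash.h>
#include <net/ip.h>

/* ip_is_fragment() is true for any packet with MF set or a nonzero
 * fragment offset, i.e. both first and subsequent fragments. */
static u32 frag_safe_hash(const struct iphdr *iph, u16 sport, u16 dport)
{
    if (ip_is_fragment(iph))
        return jhash_2words(iph->saddr, iph->daddr, iph->protocol);

    return jhash_3words(iph->saddr, iph->daddr,
                        ((u32)sport << 16) | dport, iph->protocol);
}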

NET RX softirq load balancing: the overall framework

This section gives the general framework (a sketch follows the list). Assume a very low-end NIC that:

does not support multiple queues;

does not support interrupt load balancing;

can only interrupt CPU0.
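Under these assumptions the framework looks roughly like this (my rendering of the idea): CPU0 takes every hard IRQ and only fans packets out; all CPUs share the real work.

NIC --hard IRQ--> CPU0: hash the skb's tuple, enqueue the skb on the
                  target CPU's input queue, kick that CPU
                  (IPI / raise its NET RX softirq)
                       |
          +------------+------------+
          v            v            v
        CPU0         CPU1   ...   CPUn
        (each runs the NET RX softirq: full protocol-stack
         processing of the skbs queued for it)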

CPU affinity optimization for the output relay

This section says a little about the output processing thread. The output thread's logic is relatively simple: execute the scheduling policy and then have a NIC send the skb. It does not touch the packet much (note that thanks to the VOQ, a packet's layer-2 header has already been encapsulated when it enters the VOQ, and the NIC may be able to transmit it via scatter/gather IO; if that is not supported, a memcpy is the only option...), so the CPU cache does not matter as much to it as to the receive-side stack processing threads. Nevertheless, it does touch the skb once in order to send it, and it also touches the input NIC's, or rather its own, VOQ, so having CPU cache affinity is bound to help.

To keep a single processing pipeline from getting too long and adding latency, I prefer to run the output logic in a separate thread. If there are enough CPU cores, I prefer to pin it to one core, preferably not the same core as the input processing. So which cores should be paired?

I prefer to let two cores that share the L2 or L3 cache be responsible, respectively, for a NIC's receive processing and its transmit scheduling. This creates a local input-output relay, in keeping with how motherboards and CPU core packages are generally structured. (A sketch of the pinning follows.)
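A minimal sketch of the pinning, assuming CPU0 and CPU1 share a cache on the target board (rx_thread_fn, tx_thread_fn, and the nic pointer are hypothetical; kthread_create(), kthread_bind(), and wake_up_process() are the real in-kernel APIs):

#include <linux/kthread.h>
#include <linux/sched.h>

/* Hypothetical thread bodies; a real implementation provides them. */
static int rx_thread_fn(void *data);
static int tx_thread_fn(void *data);

static void start_relay_threads(void *nic)
{
    struct task_struct *rx, *tx;

    /* Error handling (IS_ERR checks) omitted for brevity.
     * kthread_create() leaves the threads stopped, so they can be
     * bound before they first run. */
    rx = kthread_create(rx_thread_fn, nic, "nic-rx");
    tx = kthread_create(tx_thread_fn, nic, "nic-tx");
    kthread_bind(rx, 0);    /* receive processing        */
    kthread_bind(tx, 1);    /* VOQ transmit scheduling   */
    wake_up_process(rx);
    wake_up_process(tx);
}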

Why I do not analyze the code

First, because I did not use the native RPS implementation in full; I made some amendments to it. I do not use a complicated hash calculation, and I relaxed some restrictions, all with the aim of making the computation faster and keeping it stateless, with nothing to maintain!

Second, I find that I am gradually moving away from the code-analysis style I used to write in. Thick, brick-like books of code analysis are hard to follow, and it is hard to find the matching versions and patches, even though the basic ideas are exactly the same. So I prefer to sort out the flow of the events being processed rather than simply analyze code.

Disclaimer: this article is a last-resort remedy for low-end commodity hardware. If a hardware-assisted solution is available, by all means ignore the approach given here.
     
         
         
         