TCP is a connection-oriented transport-layer protocol that provides reliable data transmission to the application layer. The "connection" here is not a physical circuit: before any data is sent, the two sides must first shake hands, so that the receiver knows you are about to send data to it. UDP, by contrast, is a connectionless transport-layer protocol and does not provide reliable delivery. A common analogy fits well: UDP is like sending a letter, where the recipient has no advance notice that you are writing; TCP is like a phone call, where the other side must press the answer key before you can talk.
So how does TCP achieve connection-oriented, reliable service? Before discussing TCP's reliable data transfer, let's first look at the simplest transport-layer service: UDP.
1. UDP
Source port / destination port: serve the same role as the port numbers in the TCP header.
Length: the number of bytes in the UDP segment (header plus data).
Checksum: used for error detection, to determine whether any bit of the segment was altered on its way from source to destination.
How is the checksum calculated?
The checksum covers three parts: the UDP pseudo-header, the UDP header, and the UDP data.
In the pseudo-header, the protocol field is 6 for TCP and 17 for UDP, and the UDP length is the total length of the UDP segment (header plus data).
The computation works as follows: prepend the pseudo-header to the UDP segment, set the checksum field in the UDP header to 0, and divide all the bits into 16-bit words. Add all the 16-bit words together; whenever the sum overflows 16 bits, the carry is wrapped around and added back into the lowest bit. For example:
1011101101011110 + 1111110011101100 = 1 1011100001001010
The result 1 1011100001001010 carries into a 17th bit, so that highest 1 is added back to the lowest bit, giving 1011100001001011.
After all 16-bit words have been added this way, the final 16-bit sum is inverted (one's complement), and that value is the checksum field.
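The wrap-around sum and final inversion described above can be sketched in Python. This is a minimal illustration of the Internet checksum algorithm, not a full UDP implementation; the pseudo-header is omitted and the function name is my own.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum with end-around carry, then inverted."""
    if len(data) % 2:                        # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]  # big-endian 16-bit words
    while total >> 16:                       # fold any carry back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                   # one's complement of the sum

# The two 16-bit words from the worked example above:
a, b = 0b1011101101011110, 0b1111110011101100
s = a + b
s = (s & 0xFFFF) + (s >> 16)                 # wrap the carry around
assert s == 0b1011100001001011
```

The receiver runs the same sum over the segment including the checksum field; an all-ones result indicates no detected error.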
As the UDP header shows, UDP is a very simple transport-layer protocol. On the sending side it simply takes data from the application layer, encapsulates it in a UDP segment, and hands it to the layer below; on the receiving side it takes data from the layer below and delivers it to the application layer. Along the way UDP performs only basic error detection: if no error is detected, the segment is passed straight up to the application layer; otherwise it is discarded.
Now let's look at how TCP provides a reliable transport service.
2. TCP
Source port / destination port: used for multiplexing and demultiplexing data from and to upper-layer applications. What does that mean? Many processes may run at the application layer, each possibly sending data to or receiving data from the Internet through the transport layer. When the transport layer receives data from the network, which application process should it deliver it to? And how does it know which application-layer service the data it receives from above belongs to? Both questions are answered by port numbers: each application-layer network service is bound to a port number, which identifies that service. The port number is thus the glue that binds the transport layer to the application layer.
Sequence number and acknowledgment number: used to implement reliable data transfer.
Receive window field: indicates how much buffer space the receiver has left; used for flow control.
Header length field: the TCP header contains an optional options field, so the header length is variable and must be specified explicitly.
Options field: used by the sender and receiver to negotiate the maximum segment size (MSS), or as a window-scaling factor in high-speed network environments. A timestamp option is also defined.
RST, SYN, FIN bits: used for connection setup and teardown.
PSH bit: when set, indicates that the receiver should pass the data to the upper layer immediately.
URG bit and urgent pointer: the URG bit indicates that the segment contains data the sending-side upper-layer entity has marked "urgent"; the 16-bit urgent pointer field points to the last byte of the urgent data. When urgent data exists, TCP must notify the receiving-side upper-layer entity immediately and tell it where the urgent data ends.
Checksum field: provides error detection, as in UDP.
How does TCP ensure reliable data transmission?
(1) Before sending data, a three-way handshake establishes reliable communication between the two ends. The handshake proceeds as follows:
Both client and server start in the CLOSED state. The server opens a listening socket and enters the LISTEN state. The client then sends a SYN segment with sequence number j and enters the SYN_SENT state. When the server receives the SYN, it enters the SYN_RECV state and replies with a SYN+ACK whose acknowledgment number is j + 1 and whose sequence number is k. When the client receives the SYN+ACK, it enters the ESTABLISHED state: from its point of view it can now communicate with the server, so it may start sending data. It then sends an ACK (which may carry data) with acknowledgment number k + 1. Until the server receives this ACK, the handshake is incomplete: the client can send data, but only inside that ACK, and the server will not send data to the client. Once the server receives the ACK, it too enters the ESTABLISHED state. The three-way handshake is now complete, the connection is established, and both sides can exchange data.
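The state transitions described above can be sketched as a small table-driven model. This is an illustration of the handshake's state machine only, not a real TCP implementation; the event strings and the `State` enum are my own naming.

```python
from enum import Enum, auto

class State(Enum):
    CLOSED = auto(); LISTEN = auto(); SYN_SENT = auto()
    SYN_RECV = auto(); ESTABLISHED = auto()

# (current state, event) -> next state
CLIENT = {
    (State.CLOSED, "send SYN(seq=j)"): State.SYN_SENT,
    (State.SYN_SENT, "recv SYN+ACK(seq=k, ack=j+1), send ACK(ack=k+1)"): State.ESTABLISHED,
}
SERVER = {
    (State.CLOSED, "listen"): State.LISTEN,
    (State.LISTEN, "recv SYN(seq=j), send SYN+ACK"): State.SYN_RECV,
    (State.SYN_RECV, "recv ACK(ack=k+1)"): State.ESTABLISHED,
}

def run(table, events, state=State.CLOSED):
    """Walk a side's transition table through a sequence of events."""
    for event in events:
        state = table[(state, event)]
    return state

assert run(CLIENT, ["send SYN(seq=j)",
                    "recv SYN+ACK(seq=k, ack=j+1), send ACK(ack=k+1)"]) == State.ESTABLISHED
assert run(SERVER, ["listen", "recv SYN(seq=j), send SYN+ACK",
                    "recv ACK(ack=k+1)"]) == State.ESTABLISHED
```

Note that the client reaches ESTABLISHED one step earlier than the server, exactly as in the description above.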
Why must the handshake take exactly three steps, rather than two or four?
The essence of the problem is that the Internet channel is unreliable, yet we want to transfer data reliably over it; three messages is the theoretical minimum for doing so.
If there were only two handshakes, then after the client sends its SYN, two failure cases arise:
Case 1: the server receives the SYN and returns an ACK. Regardless of whether the client receives that ACK, the server considers the connection established, so it starts sending data to the client. But if the client never receives the ACK, it believes no connection exists, so it discards everything the server sends. The server's segments keep timing out and being retransmitted, producing a deadlock.
Case 2: the client's first connection-request segment is not lost, but is delayed for a long time at some network node and reaches the server only after the connection has already been released. By then it is a stale segment, but the server mistakes it for a fresh connection request from the client and replies with an ACK. This time, however, the client has no outstanding request, so it ignores the ACK; the server nonetheless starts sending data, the client discards it all, and the server's segments again time out and are retransmitted endlessly, another deadlock.
(2) Acknowledgment and retransmission mechanisms ensure complete, in-order data delivery.
TCP views data as an unstructured, ordered stream of bytes, so the sequence number mentioned above is the byte-stream number of the first byte in the segment, and the acknowledgment number is the sequence number of the next byte the host expects to receive from the other side. An example:
Suppose TCP receives 3000 bytes of data from the application layer, and the maximum segment size (MSS) is 1460 bytes. The data is split into segments: the first carries bytes 0-1459, the second bytes 1460-2919, and the third bytes 2920-2999; their sequence numbers are 0, 1460, and 2920 respectively.
If the server receives the first segment (bytes 0-1459), it expects byte 1460 next, so it returns an ACK with acknowledgment number 1460. If the server then receives bytes 2920-2999 but not bytes 1460-2919, it still expects byte 1460, so the acknowledgment number of its ACK remains 1460. Because TCP only acknowledges bytes up to the first missing byte, this scheme is called cumulative acknowledgment. The receiver keeps the out-of-order bytes buffered while waiting for the missing bytes to fill the gap.
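The segmentation and cumulative-ACK behavior in this example can be sketched directly; the helper names below are illustrative, and the numbers follow the text.

```python
MSS = 1460

def segment(total_bytes: int, mss: int = MSS):
    """Split a byte stream into (seq, length) pairs; seq numbers the first byte."""
    return [(off, min(mss, total_bytes - off))
            for off in range(0, total_bytes, mss)]

segs = segment(3000)
assert segs == [(0, 1460), (1460, 1460), (2920, 80)]

def cumulative_ack(received_segments):
    """ACK = next byte expected, i.e. the first gap in the received ranges."""
    expected = 0
    for seq, length in sorted(received_segments):
        if seq > expected:        # gap: out-of-order bytes are buffered,
            break                 # but the ACK cannot advance past it
        expected = max(expected, seq + length)
    return expected

# Segment 1460-2919 is lost: the ACK stays at 1460 (a duplicate ACK).
assert cumulative_ack([(0, 1460), (2920, 80)]) == 1460
```

Once the missing segment arrives, the ACK jumps past all buffered bytes at once, which is exactly what "cumulative" means.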
Of course, in a network this complex, even after a three-way handshake it is impossible to guarantee that every transmitted segment reaches its destination. Each time the client sends a segment into the network, it keeps a copy in its buffer, and discards that copy only after receiving the server's ACK for it. But if the segment is lost or corrupted in transit, or if the server's returned ACK is lost, the client will never receive an ACK. What then? It cannot wait forever.
The client uses a timeout mechanism to avoid waiting indefinitely: when it sends a segment, it starts a timer, and if the timer expires before an ACK arrives, the client retransmits the segment. How long should the timeout be? Sending a segment and receiving its ACK takes one round trip, so, writing RTT for the round-trip time, the timer should be set at least somewhat larger than the RTT. And if only the ACK was lost, won't the server receive duplicate data when the segment is retransmitted? Sequence numbers prevent duplicate delivery: when the server receives a duplicate segment, it knows the client timed out without receiving the ACK, discards the duplicate, and returns a fresh ACK.
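The timeout-retransmission loop and duplicate suppression just described can be sketched as follows. This is a stop-and-wait toy model, not real TCP: the lossy channel, the class, and the function names are all illustrative.

```python
class Receiver:
    def __init__(self):
        self.seen = set()

    def deliver(self, seq, data):
        """Discard duplicates (same seq) but still return an ACK."""
        if seq not in self.seen:
            self.seen.add(seq)     # deliver to the application exactly once
        return ("ACK", seq)        # re-ACK so the sender can stop resending

def send(seq, data, channel, max_tries=5):
    """Resend on each 'timeout' (channel returns None) until ACKed."""
    for tries in range(1, max_tries + 1):
        ack = channel(seq, data)   # None models a lost segment or lost ACK
        if ack == ("ACK", seq):
            return tries
    raise TimeoutError("no ACK after retries")

rx = Receiver()
lost = iter([True, False, False])  # the first transmission's ACK is lost
def channel(seq, data):
    ack = rx.deliver(seq, data)
    return None if next(lost) else ack

assert send(0, b"x", channel) == 2  # resent once; receiver saw a duplicate
assert rx.seen == {0}               # but delivered the data only once
```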
(3) TCP provides flow control and congestion control
Flow control is essentially a speed-matching service: the rate at which the sender transmits is matched to the rate at which the receiving application reads, eliminating the possibility of overflowing the receiver's buffer. The receive window field in the TCP header is used to advertise the receiver's remaining buffer space (rwnd) to the sender.
TCP's congestion control is not network-assisted congestion control but end-to-end congestion control, because the IP layer gives end systems no explicit feedback about network congestion. How, then, does a TCP sender limit its transmission rate, and how does it know whether there is congestion on the path?
As mentioned above, a timeout may mean a segment was lost in the network, and the server may receive duplicate segments; likewise the client may receive duplicate ACKs. We therefore define a loss event as either a timeout or the receipt of three duplicate ACKs from the receiver. When a loss event occurs, the sender infers that congestion exists on the path.
The sender maintains a congestion window (cwnd), and keeps the amount of sent-but-unacknowledged data in its buffer below the minimum of cwnd and rwnd (the receive window from flow control, i.e. the receiver's remaining buffer space). This constraint limits the amount of unacknowledged data in flight, and so indirectly limits the transmission rate.
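The in-flight constraint above can be written down in a few lines; the variable names mirror common textbook notation and are illustrative.

```python
def can_send(last_byte_sent: int, last_byte_acked: int,
             cwnd: int, rwnd: int) -> bool:
    """Sent-but-unacknowledged data must stay below min(cwnd, rwnd)."""
    in_flight = last_byte_sent - last_byte_acked
    return in_flight < min(cwnd, rwnd)

assert can_send(3000, 1000, cwnd=4000, rwnd=2500)      # 2000 in flight < 2500
assert not can_send(3000, 0, cwnd=4000, rwnd=2500)     # 3000 in flight >= 2500
```

Whichever window is smaller wins: flow control (rwnd) protects the receiver, congestion control (cwnd) protects the network.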
TCP sets its transmission rate according to these principles:
A lost segment signals congestion, so the sender's rate should be decreased when a segment is lost.
An acknowledgment signals that the network is delivering the sender's segments to the receiver, so when an ACK for a previously unacknowledged segment arrives, the sender can increase its rate.
Because the IP layer gives the upper layers no explicit congestion feedback, TCP uses ACKs and loss events as implicit signals for probing the available bandwidth.
The next question: how should the value of cwnd be set? This is the job of TCP's congestion control algorithm, which adjusts cwnd in response to loss events and has three main components: slow start, congestion avoidance, and fast recovery.
1. Slow Start
When a TCP connection starts, cwnd is usually initialized to a small multiple of the MSS. Because the sender updates cwnd only when it receives ACKs, within one RTT it can send cwnd bytes of data, making the initial transmission rate roughly MSS/RTT. This is a small value that leaves most of the bandwidth idle, so how can cwnd be grown quickly to improve bandwidth utilization?
Slow start: cwnd begins at one MSS and increases by one MSS each time a transmitted segment is first acknowledged.
In the first RTT, cwnd is one MSS, and one segment is sent.
On receiving that ACK, cwnd grows by one MSS, so in the second RTT cwnd is two MSS and two segments are sent.
On receiving those two ACKs, cwnd grows by two MSS, so in the third RTT cwnd is four MSS and four segments are sent.
And so on.
From this process it is easy to see that the transmission rate doubles every RTT: although the initial rate is only MSS/RTT, slow start grows it exponentially. Bandwidth is finite, of course, so the rate cannot grow without bound. When does this exponential growth end?
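The doubling per RTT described above can be sketched in a few lines, counting cwnd in units of MSS; the function name is illustrative.

```python
def slow_start(rtts: int):
    """Return the cwnd (in MSS) used in each of the first `rtts` RTTs."""
    cwnd, history = 1, []
    for _ in range(rtts):
        history.append(cwnd)   # cwnd segments are sent this RTT
        cwnd += cwnd           # each returning ACK adds one MSS: cwnd doubles
    return history

assert slow_start(5) == [1, 2, 4, 8, 16]   # exponential growth per RTT
```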
(1) When a timeout-triggered loss event (congestion) occurs, the TCP sender resets cwnd to 1 MSS and restarts slow start, while setting the slow-start threshold ssthresh to cwnd / 2.
(2) When slow start then resumes and cwnd grows to reach or exceed ssthresh, slow start ends and TCP switches to congestion avoidance mode.
(3) If during slow start the sender receives three duplicate ACKs, slow start ends, TCP performs a fast retransmit, and it enters fast recovery mode.
2. Congestion Avoidance
As noted above, TCP enters congestion avoidance only when cwnd, growing during slow start, reaches or exceeds ssthresh; at that point cwnd is roughly half the value it had when congestion was last encountered. Continuing slow start would keep doubling cwnd and could quickly cause congestion again. So in congestion avoidance mode, each received ACK increases cwnd not by a full MSS as in slow start, but by MSS * MSS / cwnd bytes. For example, with MSS = 1460 bytes and cwnd = 14,600 bytes, each ACK increases cwnd by 1460 * 1/10 = 146 bytes, and only after 10 ACKs does cwnd grow by one full MSS.
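The per-ACK increment just described works out to roughly one MSS of growth per RTT, which we can check numerically; the function name is illustrative and the numbers follow the text.

```python
MSS = 1460

def ca_increment(cwnd: float, mss: int = MSS) -> float:
    """Congestion avoidance: each ACK grows cwnd by MSS*MSS/cwnd bytes."""
    return cwnd + mss * mss / cwnd

cwnd = 10 * MSS                      # 14,600 bytes, as in the text
for _ in range(10):                  # one window's worth of ACKs...
    cwnd = ca_increment(cwnd)
# ...grows cwnd by roughly one MSS per RTT (slightly less, since cwnd
# rises a little with each ACK)
assert abs(cwnd - 11 * MSS) < MSS * 0.1
```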
The question arises again: as in slow start, does cwnd simply keep growing like this forever? When does it end?
(1) When a timeout-triggered loss event (congestion) occurs, the behavior is the same as in slow start: the TCP sender resets cwnd to 1 MSS and restarts slow start, while setting ssthresh to cwnd / 2.
(2) On receiving three duplicate ACKs, TCP sets ssthresh to cwnd / 2, sets cwnd to ssthresh + 3, and enters fast recovery mode.
3. Fast Recovery
While in fast recovery, for each additional duplicate ACK received for the missing segment that triggered this state, cwnd is increased by one MSS; when an ACK for new data arrives, TCP sets cwnd to ssthresh and enters the congestion avoidance state.
If a timeout occurs during fast recovery, cwnd is set to 1 MSS, ssthresh is set to half the pre-timeout cwnd, and TCP re-enters slow start.
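The three modes and their transitions can be tied together in one simplified, per-RTT simulation. This is a sketch of the rules described above under simplifying assumptions (units of MSS, whole-window ACKs per RTT, slow start capped at ssthresh); the event names are illustrative and real TCP implementations differ in detail.

```python
def step(cwnd, ssthresh, event):
    """Return (cwnd, ssthresh) after one RTT ending in `event` (units: MSS)."""
    if event == "timeout":                     # severe loss: restart slow start
        return 1, max(cwnd // 2, 2)
    if event == "3dupacks":                    # fast retransmit / fast recovery
        ssthresh = max(cwnd // 2, 2)
        return ssthresh + 3, ssthresh
    # event == "ack": the whole window was acknowledged this RTT
    if cwnd < ssthresh:                        # slow start: exponential growth
        return min(cwnd * 2, ssthresh), ssthresh
    return cwnd + 1, ssthresh                  # congestion avoidance: additive

cwnd, ssthresh = 1, 8
trace = []
for ev in ["ack", "ack", "ack", "ack", "3dupacks", "ack", "timeout", "ack"]:
    cwnd, ssthresh = step(cwnd, ssthresh, ev)
    trace.append(cwnd)
assert trace[:4] == [2, 4, 8, 9]   # doubling, then additive growth past ssthresh
```

The trace shows the characteristic sawtooth: exponential ramp-up, additive probing, a multiplicative cut on three duplicate ACKs, and a full reset on timeout.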
TCP congestion control is in effect additive-increase, multiplicative-decrease (AIMD) congestion control: while the connection's path is free of congestion (as judged by the absence of loss events), the transmission rate increases additively; when a loss event occurs, the rate decreases multiplicatively. UDP, by contrast, has no congestion control: if large volumes of UDP traffic flowed without any constraint, the network could easily lock up, with very little data actually delivered between end systems.
4. Summary:
To recap: UDP is an unreliable, connectionless transport protocol, while TCP is a connection-oriented transport-layer protocol that provides reliable data transfer. Yet UDP is still very widely used: DNS and QQ, for example, both run over UDP. Many people wonder why, when TCP provides so many services, anyone would use the unreliable UDP instead.
In fact, in a network this complex, neither TCP nor UDP can be entirely reliable.
TCP guarantees its reliability through acknowledgment and retransmission mechanisms; UDP has neither.
TCP provides an ordered byte-stream service; UDP treats each packet individually and does not guarantee that packets are delivered to the receiving application in order.
TCP provides flow control and congestion control. Flow control matches the sender's rate to the rate at which the receiving application reads from the receive buffer, so packets are not dropped because the buffer fills up; congestion control lets the TCP connections sharing a congested link each get a fair share of its bandwidth.
But precisely because TCP provides so many services, it becomes heavyweight, which hurts it when transferring large volumes of data; this is where lightweight UDP stands out. UDP is very simple and provides none of these mechanisms, which allows its transfer rate to be much faster than TCP's. Of course, some will object that UDP's loss rate is high, that UDP packets arrive out of order, that UDP's lack of congestion control can drag down the network, and so on.
Because the transport layer and the layers above it are implemented only in end systems, while packet switches inside the network implement only up through the network layer, all of TCP's services are end-to-end services. And since they are end-to-end, every problem described above can also be solved in the layer above, the application layer. By numbering the data it sends, the application can sort received data back into order; by adding acknowledgment and retransmission, it can greatly reduce the loss rate; by adding a window at the application layer, it can achieve flow control and congestion control. Of course, a UDP-based application need not reimplement every TCP service at the application layer; otherwise it might as well just use TCP. We only need to implement the services we actually require: if all we need is a lower loss rate, we implement only a retransmission mechanism. (We cannot, however, bypass the transport layer entirely and have the application layer talk directly to the network layer.)
UDP is free-form, leaving everything to the layer above; TCP is comprehensive, providing the upper layers with reliable data transfer and a full set of mechanisms. Which transport protocol to choose ultimately depends on the specific application. Both have their value; the key is to weigh the trade-offs.