
[Ch3] TCP congestion control


Principles of Congestion Control

Congestion

Congestion

  • occurs when too many sources (sending hosts) send too much data too fast for the network to handle
  • results in queueing delays and lost packets in routers' output buffers

Causes and costs of congestion at router buffers in the network

  1. Limited link bandwidth -> delay (queueing delay grows rapidly as the arrival rate approaches the link capacity)
  2. Finite output buffers at routers -> delay, loss
  3. Multi-hop paths -> more work to achieve the same throughput, unneeded retransmissions, upstream resources wasted
    • unneeded retx: retransmission of a packet that will be dropped downstream anyway
    • if a packet is dropped due to congestion, the upstream transmission capacity spent carrying that packet was wasted

 

Approaches towards congestion control

Network-assisted congestion control

  • routers provide feedback to end systems
    • single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)

End-to-end congestion control

  • no explicit feedback from network
  • congestion inferred from end-system observed loss, delay
  • approach taken by TCP -> TCP congestion control works end-to-end, without help from intermediate routers

TCP Congestion Control

Explicit Congestion Notification (ECN)

Network-assisted congestion control

  • a 2-bit ECN field in the IP header is marked by a network router to indicate congestion
  • the receiving TCP sets the ECE (ECN-Echo) flag bit in its ACK segment to echo the congestion indication back to the sender (see the sketch below)
  • few routers actually use it in practice
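
A minimal sketch of this signaling path using the standard RFC 3168 codepoints; the function and variable names are illustrative, not from this post:

# 2-bit ECN field in the IP header (RFC 3168 codepoints)
NOT_ECT = 0b00   # endpoints not ECN-capable
ECT_1   = 0b01   # ECN-Capable Transport
ECT_0   = 0b10   # ECN-Capable Transport
CE      = 0b11   # Congestion Experienced: set by a congested router

def build_ack_flags(ip_ecn_bits):
    # the receiving TCP echoes congestion back to the sender via the ECE flag
    return {"ACK": True, "ECE": ip_ecn_bits == CE}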

 

TCP Congestion control: e2e congestion control

Overview

  • When a segment is lost due to a timeout or 3 duplicate ACKs -> the lost segment implies congestion
    • 👉 decrease TCP sender's rate (i.e. congestion window size == cwnd)
  • Acknowledged segment indicates that the network is delivering the sender's segments to the receiver
    • When an ACK arrives for a previously unacknowledged segment -> network is not congested
    • 👉 increase TCP sender's rate

GOAL: TCP senders should not congest the network but, at the same time, should make use of all the available bandwidth

A TCP sender increases its sending rate to probe for the rate at which congestion begins, backs off from that rate, and then begins probing again to see whether that rate has changed

 

TCP sending rate = cwnd / RTT bytes/sec

  • cwnd is dynamic
  • send roughly cwnd bytes, wait about one RTT for the ACKs, then send more bytes (a quick sanity check with numbers follows below)
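
A quick sanity check of the rate formula; the MSS, cwnd, and RTT values below are assumptions, not from the post:

MSS = 1460              # bytes, a typical segment size
cwnd = 10 * MSS         # congestion window: 10 segments outstanding
RTT = 0.1               # seconds

rate_bps = cwnd / RTT * 8      # bytes/sec -> bits/sec
print(rate_bps / 1e6)          # ~1.17 Mbps for this window and RTT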

 

TCP Slow Start (SS)

Slow Start (SS)

  • when a connection begins, increase the rate exponentially until the first loss event
    • initially cwnd = 1 MSS
    • double cwnd every RTT by incrementing cwnd by 1 MSS for every ACK received
  • initial rate is slow but ramps up exponentially fast

ssthresh

  • threshold value of cwnd at which the speed of increase is slowed down
  • when the connection starts, ssthresh is set to a default value determined by the OS
  • if loss occurs, then ssthresh = cwnd / 2
  • when cwnd reaches ssthresh -> Congestion Avoidance (CA) starts
    • CA starts: TCP sender increments cwnd by 1 MSS in every RTT (linearly increase)
if (cwnd < ssthresh):
    SS phase (double cwnd in every RTT)
else:
    CA phase (increment cwnd by 1 MSS in every RTT)
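
A self-contained sketch of the per-RTT growth rule above; the initial ssthresh and the number of RTTs are assumed values for illustration:

cwnd = 1        # in units of MSS; Slow Start begins at 1 MSS
ssthresh = 8    # assumed initial threshold (normally an OS default)

for rtt in range(8):
    print(f"RTT {rtt}: cwnd = {cwnd} MSS")
    if cwnd < ssthresh:
        cwnd *= 2       # SS phase: double every RTT
    else:
        cwnd += 1       # CA phase: +1 MSS every RTT

Running this prints cwnd = 1, 2, 4, 8 and then 9, 10, 11, 12 once it crosses ssthresh.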

 

TCP: detecting, reacting to loss

TCP RENO

  • loss indicated by timeout -> enter SS state
    • ssthresh = cwnd / 2
    • cwnd = 1 MSS and grows exponentially again
    • when cwnd reaches ssthresh, enter CA state
  • loss indicated by 3 dup ACKs -> enter CA state (fast recovery)
    • ssthresh = cwnd / 2
    • cwnd = cwnd / 2 and grows linearly
    • dup ACKs indicate the network is still capable of delivering some segments

TCP Tahoe

  • loss indicated by either timeout or 3 dup ACKs -> enter SS state
    • ssthresh = cwnd / 2
    • cwnd = 1 MSS and grows exponentially again

A timeout indicates a worse network condition than 3 dup ACKs, so the reaction to it is more drastic (a minimal sketch of both reactions follows below).

(figure: cwnd over time when loss is detected by 3 dup ACKs)
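
A minimal sketch of the two loss reactions, with cwnd measured in MSS units; the state dictionary and function names are mine, not a real TCP API:

def on_timeout(state):
    # Tahoe and Reno react the same way: back to Slow Start
    state["ssthresh"] = state["cwnd"] // 2
    state["cwnd"] = 1

def on_triple_dup_ack(state, reno=True):
    state["ssthresh"] = state["cwnd"] // 2
    if reno:
        state["cwnd"] = state["ssthresh"]   # Reno: halve cwnd, continue in CA
    else:
        state["cwnd"] = 1                   # Tahoe: treat it like a timeout, back to SS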

 

TCP Congestion Control: AIMD

Additive Increase Multiplicative Decrease (AIMD)

  • sender increases its transmission rate (window size), probing for usable bandwidth, until loss occurs
  • AI (additive increase): increase cwnd by 1 MSS every RTT until loss is detected
  • MD (multiplicative decrease): cut cwnd in half after a loss (a toy sawtooth simulation follows below)
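
A toy simulation of the AIMD sawtooth; the starting window and the RTTs at which a loss is "detected" are made up just to show the shape:

cwnd = 10                      # in MSS units, assumed starting window
loss_rtts = {8, 15, 21}        # assumed loss events

for rtt in range(25):
    print(rtt, cwnd)
    if rtt in loss_rtts:
        cwnd = max(1, cwnd // 2)   # MD: cut the window in half on loss
    else:
        cwnd += 1                  # AI: +1 MSS per RTT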

 

TCP Throughput

TCP throughput ∝ f(W, 1/RTT, 1/L)

  • TCP throughput increases with the window size W and decreases as the RTT and the segment loss probability L grow (inversely with RTT and with √L, as the formula below shows).

Avg. TCP throughput = (1.22 * MSS) / (RTT * √L)
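
Plugging illustrative numbers into this formula (the MSS, RTT, and L values are assumptions):

from math import sqrt

MSS = 1460 * 8     # bits per segment
RTT = 0.1          # seconds
L = 1e-4           # segment loss probability

avg_throughput_bps = 1.22 * MSS / (RTT * sqrt(L))
print(avg_throughput_bps / 1e6)    # ≈ 14.25 Mbps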

 

TCP Fairness

Fairness goal

  • if K TCP sessions share the same bottleneck link of bandwidth R, each should get an average rate of R/K (e.g., 10 sessions on a 100 Mbps bottleneck -> about 10 Mbps each)
  • this sharing is fair only when the competing connections have (roughly) equal RTTs (a toy convergence simulation follows below)
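
A rough sketch of why AIMD pushes two competing flows toward the fair share; the bandwidth and starting rates are assumed numbers:

R = 100                    # bottleneck bandwidth in Mbps
x, y = 70, 10              # unequal starting rates of two TCP flows (Mbps)

for _ in range(200):
    if x + y > R:                  # link overloaded -> both flows detect loss
        x, y = x / 2, y / 2        # multiplicative decrease
    else:
        x, y = x + 1, y + 1        # additive increase

print(round(x - y))    # the gap between the two flows shrinks toward 0 (equal share)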

 

 

Unfair because of UDP

  • when TCP and UDP share a bottleneck link
    • TCP lowers its sending rate for congestion control, while UDP keeps sending at whatever rate the application layer (L5) hands down
    • unfairness can arise: the UDP user ends up using more of the link bandwidth than the TCP user

Unfair allocation of R due to parallel TCP connections

  • an application that opens many parallel (non-persistent) TCP sessions at once can, at a given moment, grab more bandwidth than an application using a single persistent TCP connection, causing unfairness
    • e.g., if a bottleneck of rate R already carries 9 connections, a new application opening 11 parallel connections gets more than R/2, whereas opening a single connection would get only about R/10