Wednesday, 12 December 2012

CONGESTION CONTROL


Congestion: When more packets are sent into a subnet than its capacity can handle, the situation that arises is called congestion.

Reasons for Congestion:

  1. If packets arriving on 3 or 4 input lines all require the same output line, a queue builds up at that line.
  2. Even if routers are supplied with an infinite amount of memory, packets take so long to reach the front of the queue that they time out and duplicates are generated.
  3. Slow processors cause congestion.
  4. Low-bandwidth lines also cause congestion.
  5. Congestion tends to feed upon itself and make the situation worse.
Congestion Control Algorithms: 

These algorithms are mainly divided into two groups:

  1. Open Loop Solutions
  2. Closed Loop Solutions
 Open Loop Solutions attempt to solve the problem by good design, making sure congestion does not occur in the first place. Once the system is up and running, midcourse corrections are not made.
 Closed Loop Solutions are based on the concept of a feedback loop and have 3 parts:
Ø  Monitor the system to detect when and where congestion occurs.
Ø  Pass this information to places where action can be taken.
Ø  Adjust system operation to correct the problem.

These closed loop algorithms are further divided into two categories:
  • Implicit feedback:   The source deduces the existence of congestion by making local observations.
  • Explicit feedback:   Packets are sent back from the point of congestion to warn the source.
Open Loop Solutions:
Ø  Congestion Prevention Policies
Ø  Traffic Shaping
Ø  Flow Specifications

1. Congestion prevention policies:

Congestion is prevented by applying appropriate policies at the various layers.

Layer        Policies
----------------------------------------------------------------
Transport    1. Retransmission policy
             2. Out-of-order caching policy
             3. Acknowledgement policy
             4. Flow control policy
             5. Timeout determination

Network      1. Virtual circuits versus datagrams inside the subnet
             2. Packet queuing and service policy
             3. Packet discard policy
             4. Routing algorithm
             5. Packet lifetime management

Data Link    1. Retransmission policy
             2. Out-of-order caching policy
             3. Acknowledgement policy
             4. Flow control policy

Retransmission policy: Deals with how fast a sender times out and what it transmits upon timeout.
Out-of-order caching policy: If receivers routinely discard all out-of-order packets, those packets will have to be retransmitted later, creating extra traffic.
Acknowledgement policy: If each packet is acknowledged immediately, the acknowledgements generate extra traffic; this policy deals with piggybacking acknowledgements onto reverse traffic instead.
Flow control policy: A tight flow control scheme (e.g., a small window) reduces the data rate and thus helps fight congestion.
Timeout determination: Harder at the transport layer because transit time across the network is less predictable than transit time over a wire between two routers (a sketch of one common estimation approach follows these definitions).
Virtual circuits vs datagrams: Affects congestion because many congestion control algorithms work only with virtual-circuit subnets.
Packet queuing and service policy: Relates to whether routers have one queue per input line, one queue per output line, or both.
Packet discard policy: Tells which packet is dropped when there is no space.
Routing algorithm: A good routing algorithm spreads traffic over all the lines.
Packet lifetime management: Deals with how long a packet may live before being discarded.
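Because transit time varies, a fixed timeout value is a poor fit, and transport protocols typically derive the timeout from measured round-trip times instead. The sketch below shows one common approach, an exponentially weighted moving average; this technique is not described in these notes, and the class name and the alpha/beta parameters are illustrative assumptions.

class RttEstimator:
    """Hypothetical adaptive timeout estimator (an assumption, not part
    of these notes): smooth measured round-trip times with an
    exponentially weighted moving average, then time out at a multiple
    of the smoothed estimate."""

    def __init__(self, alpha=0.875, beta=2.0):
        self.alpha = alpha   # weight kept for the old estimate
        self.beta = beta     # safety factor applied to the estimate
        self.srtt = None     # smoothed round-trip time, in seconds

    def observe(self, sample_rtt):
        """Fold one measured round-trip time into the running estimate."""
        if self.srtt is None:
            self.srtt = sample_rtt
        else:
            self.srtt = self.alpha * self.srtt + (1 - self.alpha) * sample_rtt

    def timeout(self):
        """Retransmission timeout derived from the current estimate."""
        return self.beta * self.srtt

# Usage: the timeout adapts as measured RTTs drift upward.
est = RttEstimator()
for rtt in (0.10, 0.12, 0.30):
    est.observe(rtt)
print(round(est.timeout(), 3))   # timeout grows with the observed RTTs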

 2.  Traffic Shaping: 
Traffic shaping is the process of forcing packets to be transmitted at a more predictable rate. This approach is widely used in ATM networks to manage congestion. When a virtual circuit is set up, the user and the subnet agree on a certain traffic pattern for that circuit. Monitoring a traffic flow to make sure it keeps to the agreement is called traffic policing.

Traffic shaping can be implemented with either of two techniques:
 ·         Leaky Bucket Algorithm
 ·         Token Bucket Algorithm

The Leaky Bucket Algorithm:   

Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate, ρ, when there is any water in the bucket, and zero when the bucket is empty. Also, once the bucket is full, any additional water entering it spills over the sides and is lost. The same idea can be applied to packets.

 
               Conceptually, each host is connected to the network by an interface containing a leaky bucket, i.e., a finite internal queue. If a packet arrives at the queue when it is full, it is discarded. In other words, if one or more processes within the host try to send a packet when the maximum number are already queued, the new packet is unceremoniously discarded. This arrangement can be built into the hardware interface or simulated by the host operating system. It was first proposed by Turner and is called the “leaky bucket algorithm”.
               The host is allowed to put one packet per clock tick onto the network. This turns an uneven flow of packets from the user processes inside the host into an even flow of packets onto the network, smoothing out bursts and greatly reducing the chances of congestion.
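As a sketch, the leaky bucket interface described above can be modeled as a finite queue drained once per clock tick. The class and method names below are illustrative, and tick() stands in for the clock tick; this is a minimal model under those assumptions, not a production shaper.

import collections

class LeakyBucket:
    """A minimal leaky-bucket interface: a finite internal queue that
    is drained at a constant rate of one packet per clock tick.
    (Class and method names are illustrative.)"""

    def __init__(self, capacity):
        self.capacity = capacity           # finite queue size
        self.queue = collections.deque()

    def arrive(self, packet):
        """A host process hands a packet to the interface. If the
        maximum number are already queued, the packet is discarded."""
        if len(self.queue) >= self.capacity:
            return False                   # bucket full: packet is lost
        self.queue.append(packet)
        return True

    def tick(self):
        """One clock tick: put at most one packet onto the network,
        giving a constant outflow no matter how bursty the input is."""
        return self.queue.popleft() if self.queue else None

# Usage: a burst of 8 packets arrives at a bucket that holds 5;
# the last 3 spill over, and the rest leave one per tick.
bucket = LeakyBucket(capacity=5)
results = [bucket.arrive("pkt%d" % i) for i in range(8)]
print(results)                             # 5 True, then 3 False
while bucket.queue:
    print("tick ->", bucket.tick())        # one packet per tick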

The Token Bucket Algorithm: The algorithm that allows the output to speed up when large bursts arrive, and that never loses data, is the token bucket algorithm. In this algorithm, the leaky bucket holds tokens, generated by a clock at the rate of one token every ΔT sec. The algorithm allows hosts to save up permission to send, up to the maximum size of the bucket, n; i.e., bursts of up to n packets can be sent at once, allowing some burstiness in the output stream and giving faster response to sudden bursts of input.


As an example, picture a bucket holding 3 tokens, with 5 packets waiting to be transmitted. For a packet to be transmitted, it must capture and destroy one token. Here, 3 of the 5 packets get through by capturing the 3 tokens in the bucket, but the other 2 are stuck waiting for 2 more tokens to be generated. The implementation of the token bucket algorithm is just a variable that counts tokens. The counter is incremented by 1 every ΔT and decremented by 1 when a packet is sent. When the counter hits 0, no packets may be sent.
The major advantage of the token bucket algorithm is that it throws away tokens instead of packets when the bucket fills up.
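Since the notes point out that the implementation is just a variable that counts tokens, a minimal counter-based sketch could look like the following; the bucket size and the names are illustrative assumptions.

class TokenBucket:
    """A minimal token-bucket shaper: the counter implementation the
    notes describe. Tokens accrue one per clock tick (every delta-T);
    sending a packet consumes one token; the count is capped at the
    bucket size n."""

    def __init__(self, n):
        self.n = n            # maximum bucket size (max burst length)
        self.tokens = 0       # current token count

    def tick(self):
        """Clock tick every delta-T seconds: add one token. Once the
        bucket is full, the extra token is thrown away (no packet is
        ever lost, only a token)."""
        if self.tokens < self.n:
            self.tokens += 1

    def try_send(self):
        """A packet must capture and destroy one token to be sent.
        When the counter is 0, no packets may be sent."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

# Usage mirroring the example above: 3 tokens saved up, 5 packets waiting.
tb = TokenBucket(n=8)
for _ in range(3):
    tb.tick()                          # 3 tokens accumulate while idle
sent = [tb.try_send() for _ in range(5)]
print(sent)                            # [True, True, True, False, False]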
