Multimedia Networking

7.7.1 Scheduling Mechanisms

Scheduling Mechanisms

Packets belonging to various network flows are multiplexed and queued for transmission at the output buffers associated with a link.  The manner in which queued packets are selected for transmission on the link is known as the link-scheduling discipline.
 
First-In-First-Out
Packets arriving at the link output queue wait for transmission if the link is currently busy transmitting another packet.  If there is not sufficient buffering space to hold the arriving packet, the queue's packet-discarding policy then determines whether the packet will be dropped (lost) or whether other packets will be removed from the queue to make space for the arriving packet.  When a packet is completely transmitted over the outgoing link, it is removed from the queue.
 
The FIFO scheduling discipline selects packets for link transmission in the same order in which they arrive at the output link queue.
 
Packet arrivals are indicated by numbered arrows above the upper timeline, with the number indicating the order in which the packet arrived.  Because of the FIFO discipline, packets leave in the same order in which they arrived.

[Figure: The FIFO queue in operation (kurose_320719_c07f23.gif)]
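The drop-tail FIFO behavior described above can be summarized in a short sketch.  The code below is illustrative only; the class name FifoQueue and the max_size parameter are assumptions, not from the text.  An arriving packet is discarded when the buffer is full, and queued packets are otherwise transmitted in arrival order.

from collections import deque

class FifoQueue:
    def __init__(self, max_size):
        self.max_size = max_size      # available buffering space, in packets
        self.queue = deque()

    def enqueue(self, packet):
        # Arriving packet waits if the link is busy; drop it if the buffer is full
        # (a simple drop-tail packet-discarding policy).
        if len(self.queue) >= self.max_size:
            return False              # packet dropped (lost)
        self.queue.append(packet)
        return True

    def dequeue(self):
        # Select the earliest-arriving packet for transmission on the link.
        return self.queue.popleft() if self.queue else None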

Priority Queuing
Under priority queuing, packets arriving at the output link are classified into priority classes at the output queue.  A packet's priority class may depend on an explicit marking that it carries in its packet header, its source or destination IP address, its destination port number, or other criteria.  Each priority class typically has its own queue.  When choosing a packet to transmit, the priority queuing discipline will transmit a packet from the highest priority class that has a nonempty queue.  The choice among packets in the same priority class is typically made in a FIFO manner.

[Figure: The priority queue in operation (kurose_320719_c07f25.gif)]
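A minimal sketch of the priority queuing decision follows, assuming a hypothetical PriorityScheduler class with one FIFO queue per priority class (class 0 is taken here as the highest priority); the classification step that assigns a packet to a class is left to the caller.

from collections import deque

class PriorityScheduler:
    def __init__(self, num_classes):
        # one FIFO queue per priority class; index 0 is the highest priority
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, priority_class):
        # classification could be based on a header marking, IP address, or port
        self.queues[priority_class].append(packet)

    def dequeue(self):
        # transmit from the highest-priority class with a nonempty queue,
        # choosing among that class's packets in FIFO order
        for q in self.queues:
            if q:
                return q.popleft()
        return None                   # no packets queued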

Round Robin and Weighted Fair Queuing (WFQ)
Under the round robin queuing discipline, packets are sorted into classes as with priority queuing.  Rather than there being a strict priority of service among classes, a round robin scheduler alternates service among the classes.  In the simplest form of round robin scheduling, a class 1 packet is transmitted, followed by a class 2 packet, followed by a class 1 packet, followed by a class 2 packet, and so on.  A so-called work-conserving queuing discipline will never allow the link to remain idle whenever there are packets queued for transmission.  A work-conserving round robin discipline that looks for a packet of a given class but finds none will immediately check the next class in the round robin sequence.

[Figure: The round robin queue in operation (kurose_320719_c07f27.gif)]
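The work-conserving round robin scan can be sketched as follows (illustrative; RoundRobinScheduler and next_class are assumed names).  If the class whose turn it is has no queued packet, the scheduler immediately checks the next class in the sequence rather than letting the link go idle.

from collections import deque

class RoundRobinScheduler:
    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]
        self.next_class = 0           # class where the circular scan resumes

    def enqueue(self, packet, cls):
        self.queues[cls].append(packet)

    def dequeue(self):
        # Work-conserving: scan the classes in circular order starting from
        # next_class and transmit from the first nonempty queue found.
        n = len(self.queues)
        for i in range(n):
            cls = (self.next_class + i) % n
            if self.queues[cls]:
                self.next_class = (cls + 1) % n
                return self.queues[cls].popleft()
        return None                   # all queues empty; the link may go idle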

A generalized abstraction of round robin queuing that has found considerable use in QoS architectures is the so-called weighted fair queuing (WFQ) discipline.  Arriving packets are classified and queued in the appropriate per-class waiting area.  As in round robin scheduling, a WFQ scheduler will serve classes in a circular manner, first serving class 1, then serving class 2, then serving class 3, and then repeating the service pattern.  WFQ is also a work-conserving queuing discipline and thus will immediately move on to the next class in the service sequence when it finds an empty class queue.
 
WFQ differs from round robin in that each class may receive a differential amount of service in any interval of time.  Each class, i, is assigned a weight, wi.  Under WFQ, during any interval of time during which there are class i packets to send, class i is guaranteed to receive a fraction of service equal to wi/(Σ wj), where the sum in the denominator is taken over all classes that also have packets queued for transmission.  For example, if only classes 1 and 2 have queued packets, with w1 = 1 and w2 = 3, class 1 is guaranteed 1/4 of the link's service and class 2 is guaranteed 3/4.

[Figure: Weighted fair queuing (kurose_320719_c07f28.gif)]
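The bandwidth-share guarantee above can be illustrated with a short sketch.  This is not a full WFQ implementation (which would track per-packet service order under a fluid-flow model); wfq_share is a hypothetical helper that simply computes, for each backlogged class, the fraction wi/(Σ wj) over the classes that currently have packets queued.

def wfq_share(weights, backlogged):
    # weights: dict mapping class id -> weight wi
    # backlogged: set of classes that currently have packets queued
    total = sum(weights[c] for c in backlogged)
    return {c: weights[c] / total for c in backlogged}

# Example from the text: classes 1 and 2 are backlogged, with w1 = 1 and w2 = 3;
# class 3 (weight 2) has no queued packets and so receives no share.
print(wfq_share({1: 1, 2: 3, 3: 2}, {1, 2}))   # {1: 0.25, 2: 0.75}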