The Leaky Bucket Algorithm
Imagine a bucket with a small hole in the bottom, as
illustrated in Fig. 5-32(a). No matter
the rate at which water enters the bucket, the outflow is at a constant rate,
r, when there is any water in the bucket and zero when
the bucket is empty. Also, once the bucket is full, any additional water
entering it spills over the sides and is lost (i.e., does not appear in the
output stream under the hole).
Figure 5-32. (a) A leaky bucket with water. (b) A leaky bucket with packets.
The same idea can be applied to packets, as shown in Fig. 5-32(b). Conceptually, each host is
connected to the network by an interface containing a leaky bucket, that is, a
finite internal queue. If a packet arrives at the queue when it is full, the
packet is discarded. In other words, if one or more processes within the host
try to send a packet when the maximum number is already queued, the new packet
is unceremoniously discarded. This arrangement can be built into the hardware
interface or simulated by the host operating system. It was first proposed by
Turner (1986) and is called the leaky bucket
algorithm. In fact, it is nothing other than a single-server queueing
system with constant service time.
The host is allowed to put one packet per clock tick onto the
network. Again, this can be enforced by the interface card or by the operating
system. This mechanism turns an uneven flow of packets from the user processes
inside the host into an even flow of packets onto the network, smoothing out
bursts and greatly reducing the chances of congestion.
When the packets are all the same size (e.g., ATM cells), this
algorithm can be used as described. However, when variable-sized packets are
being used, it is often better to allow a fixed number of bytes per tick, rather
than just one packet. Thus, if the rule is 1024 bytes per tick, a single
1024-byte packet can be admitted on a tick, two 512-byte packets, four 256-byte
packets, and so on. If the residual byte count is too low, the next packet must
wait until the next tick.
Implementing the original leaky bucket algorithm is easy. The
leaky bucket consists of a finite queue. When a packet arrives, if there is room
on the queue it is appended to the queue; otherwise, it is discarded. At every
clock tick, one packet is transmitted (unless the queue is empty).
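A minimal sketch of this scheme in Python may make it concrete. The class name LeakyBucket and the arrive/tick methods are illustrative, assuming some external clock calls tick() once per tick:

```python
from collections import deque

class LeakyBucket:
    """Packet-based leaky bucket: a bounded FIFO drained at one packet per tick."""

    def __init__(self, capacity):
        self.capacity = capacity          # maximum packets that may be queued
        self.queue = deque()

    def arrive(self, packet):
        """Called when a host process emits a packet; drop it if the bucket is full."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False                      # bucket full: packet is discarded

    def tick(self):
        """Called once per clock tick; transmit at most one packet."""
        if self.queue:
            return self.queue.popleft()   # hand one packet to the network
        return None                       # bucket empty: nothing to send
```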
The byte-counting leaky bucket is implemented almost the same
way. At each tick, a counter is initialized to n, the number of bytes that may
be sent per tick. If the first packet on the queue has no more bytes than the
current value of the counter, it is transmitted, and the counter is decremented
by that number of bytes. Additional packets may also be sent, as long as the
counter is high enough. When the counter drops below the length of the next
packet on the queue, transmission stops until the next tick, at which time the
residual byte count is reset and the flow can continue.
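The byte-counting variant can be sketched the same way. Again the names are illustrative, and the queue is assumed to hold (packet, length) pairs:

```python
from collections import deque

class ByteCountingLeakyBucket:
    """Byte-counting leaky bucket: up to n bytes may leave per tick."""

    def __init__(self, capacity_bytes, n):
        self.capacity = capacity_bytes    # maximum bytes that may be queued
        self.queued_bytes = 0
        self.n = n                        # byte allowance per tick (e.g., 1024)
        self.queue = deque()              # holds (packet, length) pairs

    def arrive(self, packet, length):
        """Queue the packet if it fits; otherwise discard it."""
        if self.queued_bytes + length <= self.capacity:
            self.queue.append((packet, length))
            self.queued_bytes += length
            return True
        return False                      # no room: packet is discarded

    def tick(self):
        """Reset the counter to n, then send whole packets in order while it lasts."""
        counter = self.n
        sent = []
        while self.queue and self.queue[0][1] <= counter:
            packet, length = self.queue.popleft()
            counter -= length
            self.queued_bytes -= length
            sent.append(packet)
        return sent                       # packets transmitted this tick
```

Note that the loop stops at the first packet that does not fit in the remaining byte count, so packets are never sent out of order.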
As an example of a leaky bucket, imagine that a computer can
produce data at 25 million bytes/sec (200 Mbps) and that the network also runs
at this speed. However, the routers can accept this data rate only for short
intervals (basically, until their buffers fill up). For long intervals, they
work best at rates not exceeding 2 million bytes/sec. Now suppose data comes in
1-million-byte bursts, one 40-msec burst every second. To reduce the peak
rate to 2 MB/sec, we could use a leaky bucket with r = 2
MB/sec and a capacity, C, of 1 MB. This means
that bursts of up to 1 MB can be handled without data loss and that such bursts
are spread out over 500 msec, no matter how fast they come in.
In Fig. 5-33(a) we see
the input to the leaky bucket running at 25 MB/sec for 40 msec. In Fig. 5-33(b) we see the output draining out
at a uniform rate of 2 MB/sec for 500 msec.
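These figures follow directly from the rates involved; the short snippet below merely replays the arithmetic from the text:

```python
# Checking the numbers from the example (all values taken from the text):
burst_rate = 25_000_000      # bytes/sec input rate during a burst
burst_len  = 1_000_000       # bytes per burst (equal to C, the bucket capacity)
drain_rate = 2_000_000       # bytes/sec output rate r

burst_time = burst_len / burst_rate   # 0.04 s  = 40 msec of input
drain_time = burst_len / drain_rate   # 0.5 s   = 500 msec of output
print(f"burst lasts {burst_time*1000:.0f} msec, drains over {drain_time*1000:.0f} msec")
```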