The Token Bucket Algorithm
The leaky bucket algorithm enforces a rigid output pattern at
the average rate, no matter how bursty the traffic is. For many applications, it
is better to allow the output to speed up somewhat when large bursts arrive, so
a more flexible algorithm is needed, preferably one that never loses data. One
such algorithm is the token bucket algorithm.
In this algorithm, the leaky bucket holds tokens, generated by a clock at the
rate of one token every ΔT sec. In Fig.
5-34(a) we see a bucket holding three tokens, with five packets waiting to
be transmitted. For a packet to be transmitted, it must capture and destroy one
token. In Fig. 5-34(b) we see that three
of the five packets have gotten through, but the other two are stuck waiting for
two more tokens to be generated.
Figure 5-34. The token bucket algorithm. (a) Before. (b) After.
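As a concrete illustration of the scenario in Fig. 5-34, the short Python sketch below (with made-up packet names) walks through the same steps: three tokens in the bucket, five packets waiting, and transmission stopping once the tokens run out.

```python
from collections import deque

tokens = 3                                        # the bucket of Fig. 5-34(a)
waiting = deque(["P1", "P2", "P3", "P4", "P5"])   # five queued packets

sent = []
while waiting and tokens > 0:
    sent.append(waiting.popleft())  # a packet captures and destroys one token
    tokens -= 1

print("sent:", sent)                    # ['P1', 'P2', 'P3']
print("still waiting:", list(waiting))  # ['P4', 'P5'], as in Fig. 5-34(b)
```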
The token bucket algorithm provides a different kind of traffic
shaping than that of the leaky bucket algorithm. The leaky bucket algorithm does
not allow idle hosts to save up permission to send large bursts later. The token
bucket algorithm does allow saving, up to the maximum size of the bucket, n. This property means that bursts of up to n packets can be sent at once, allowing some burstiness
in the output stream and giving faster response to sudden bursts of input.
Another difference between the two algorithms is that the token
bucket algorithm throws away tokens (i.e., transmission capacity) when the
bucket fills up but never discards packets. In contrast, the leaky bucket
algorithm discards packets when the bucket fills up.
Here, too, a minor variant is possible, in which each token
represents the right to send not one packet, but k bytes. A packet can only be transmitted if enough
tokens are available to cover its length in bytes. Fractional tokens are kept
for future use.
The leaky bucket and token bucket algorithms can also be used
to smooth traffic between routers, in addition to regulating host output as in our
examples. However, one clear difference is that a token bucket regulating a host
can make the host stop sending when the rules say it must. Telling a router to
stop sending while its input keeps pouring in may result in lost data.
The implementation of the basic token bucket algorithm is just
a variable that counts tokens. The counter is incremented by one every ΔT and decremented by one
whenever a packet is sent. When the counter hits zero, no packets may be sent.
In the byte-count variant, the counter is incremented by k bytes every ΔT and decremented by the length of each packet
sent.
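A minimal Python sketch of this counter scheme is shown below, using the byte-count variant; the names (TokenBucket, tick, try_send) are invented here for illustration, and tick() is assumed to be driven by a clock firing every ΔT seconds.

```python
class TokenBucket:
    """Sketch of the byte-count token bucket: a single counter of bytes."""

    def __init__(self, k_bytes, capacity_bytes):
        self.k = k_bytes                # tokens (bytes) added per clock tick
        self.capacity = capacity_bytes  # bucket size; excess tokens are discarded
        self.counter = capacity_bytes   # assume the bucket starts full

    def tick(self):
        """Called by a clock every ΔT seconds."""
        self.counter = min(self.capacity, self.counter + self.k)

    def try_send(self, packet_len):
        """Send only if enough tokens cover the packet's length in bytes."""
        if self.counter >= packet_len:
            self.counter -= packet_len
            return True
        return False    # the packet waits for more tokens; it is not dropped
```

For the one-token-per-packet variant described first, k would be 1 and every packet would cost a single token.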
Essentially what the token bucket does is allow bursts, but up
to a regulated maximum length. Look at Fig.
5-33(c) for example. Here we have a token bucket with a capacity of 250 KB.
Tokens arrive at a rate allowing output at 2 MB/sec. Assuming the token bucket
is full when the 1-MB burst arrives, the bucket can drain at the full 25 MB/sec
for about 11 msec. Then it has to cut back to 2 MB/sec until the entire input
burst has been sent.
Calculating the length of the maximum rate burst is slightly
tricky. It is not just 1 MB divided by 25 MB/sec because while the burst is
being output, more tokens arrive. If we call the burst length S sec, the token bucket capacity C bytes, the token arrival rate r bytes/sec, and the maximum output rate M bytes/sec, we see that an output burst contains a
maximum of C + rS bytes. We also know that the number of bytes in a
maximum-speed burst of length S seconds is MS. Hence we have

C + rS = MS

We can solve this equation to get S = C/(M - r). For our parameters of
C = 250 KB, M = 25
MB/sec, and r = 2 MB/sec, we get a burst time of about 11
msec. Figure 5-33(d) and Fig. 5-33(e) show the token bucket for capacities of 500
KB and 750 KB, respectively.
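The arithmetic can be checked directly. The small computation below, which assumes decimal units (1 KB = 1,000 bytes, 1 MB = 1,000,000 bytes), evaluates S = C/(M - r) for the three bucket capacities of Fig. 5-33(c)-(e).

```python
M = 25e6   # maximum output rate, bytes/sec (25 MB/sec)
r = 2e6    # token arrival rate, bytes/sec (2 MB/sec)

for C in (250e3, 500e3, 750e3):   # bucket capacities in bytes
    S = C / (M - r)               # burst length in seconds
    print(f"C = {C/1e3:.0f} KB -> maximum-rate burst of {S*1e3:.1f} msec")

# C = 250 KB gives 10.9 msec, the "about 11 msec" mentioned above.
```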
A potential problem with the token bucket algorithm is that it
allows large bursts again, even though the maximum burst interval can be
regulated by careful selection of r and M. It is frequently desirable to reduce the peak rate,
but without going back to the low value of the original leaky bucket.
One way to get smoother traffic is to insert a leaky bucket
after the token bucket. The rate of the leaky bucket should be higher than the
token bucket's r but lower than the maximum rate of the
network. Figure 5-33(f) shows the output
for a 500-KB token bucket followed by a 10-MB/sec leaky bucket.
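One rough way to picture this cascade in code, reusing the hypothetical TokenBucket sketch above and adding an equally hypothetical LeakyBucket, is given below: a packet first spends tokens, then queues in the leaky bucket, which drains at a fixed rate below the network maximum.

```python
from collections import deque

class LeakyBucket:
    """Sketch of a byte-based leaky bucket: a finite queue drained at a fixed rate."""

    def __init__(self, rate_bytes_per_sec, capacity_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = capacity_bytes
        self.queue = deque()
        self.queued = 0

    def offer(self, packet_len):
        """Queue a packet; packets that would overflow the bucket are discarded."""
        if self.queued + packet_len > self.capacity:
            return False
        self.queue.append(packet_len)
        self.queued += packet_len
        return True

    def drain(self, dt):
        """Release whole packets from the head, up to rate * dt bytes per call."""
        budget = self.rate * dt
        while self.queue and self.queue[0] <= budget:
            budget -= self.queue[0]
            self.queued -= self.queue.popleft()

def shape(packet_len, token_bucket, leaky_bucket):
    """The token bucket limits burst size; the leaky bucket caps the peak rate."""
    if token_bucket.try_send(packet_len):
        return leaky_bucket.offer(packet_len)
    return False
```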
Policing all these schemes can be a bit tricky. Essentially,
the network has to simulate the algorithm and make sure that no more packets or
bytes are being sent than are permitted. Nevertheless, these tools provide ways
to shape the network traffic into more manageable forms and help meet
quality-of-service requirements.