3.4.2 A Protocol Using Go Back N
Until now we have made the tacit
assumption that the transmission time required for a frame to arrive at the
receiver plus the transmission time for the acknowledgement to come back is
negligible. Sometimes this assumption is clearly false. In these situations the
long round-trip time can have important implications for the efficiency of the
bandwidth utilization. As an example, consider a 50-kbps satellite channel with
a 500-msec round-trip propagation delay. Let us imagine trying to use protocol
4 to send 1000-bit frames via the satellite. At t = 0 the sender starts sending
the first frame. At t = 20 msec the frame has been completely sent. Not until t
= 270 msec has the frame fully arrived at the receiver, and not until t = 520
msec has the acknowledgement arrived back at the sender, under the best of circumstances
(no waiting in the receiver and a short acknowledgement frame). This means that
the sender was blocked during 500/520 or 96 percent of the time. In other
words, only 4 percent of the available bandwidth was used. Clearly, the
combination of a long transit time, high bandwidth, and short frame length is
disastrous in terms of efficiency.
The problem described above can be
viewed as a consequence of the rule requiring a sender to wait for an
acknowledgement before sending another frame. If we relax that restriction,
much better efficiency can be achieved. Basically, the solution lies in
allowing the sender to transmit up to w frames before blocking, instead of just
1. With an appropriate choice of w the sender will be able to continuously
transmit frames for a time equal to the round-trip transit time without filling
up the window. In the example above, w should be at least 26. The sender begins
sending frame 0 as before. By the time it has finished sending 26 frames, at t
= 520, the acknowledgement for frame 0 will have just arrived. Thereafter,
acknowledgements arrive every 20 msec, so the sender always gets permission to
continue just when it needs it. At all times, 25 or 26 unacknowledged frames
are outstanding. Put in other terms, the sender's maximum window size is 26.
The need for a large window on the
sending side occurs whenever the product of bandwidth × round-trip delay is
large. If the bandwidth is high, even for a moderate delay, the sender will
exhaust its window quickly unless it has a large window. If the delay is high
(e.g., on a geostationary satellite channel), the sender will exhaust its
window even for a moderate bandwidth. The product of these two factors
basically tells what the capacity of the pipe is, and the sender needs the ability
to fill it without stopping in order to operate at peak efficiency.
This technique is known as pipelining.
If the channel capacity is b bits/sec, the frame size l bits, and the
round-trip propagation time R sec, the time required to transmit a single frame
is l/b sec. After the last bit of a data frame has been sent, there is a delay
of R/2 before that bit arrives at the receiver and another delay of at least R/2
for the acknowledgement to come back, for a total delay of R. In stop-and-wait
the line is busy for l/b and idle for R, giving

line utilization = l / (l + bR)

If l < bR, the efficiency will be
less than 50 percent. Since there is always a nonzero delay for the
acknowledgement to propagate back, pipelining can, in principle, be used to
keep the line busy during this interval, but if the interval is small, the
additional complexity is not worth the trouble.
Pipelining frames over an unreliable
communication channel raises some serious issues. First, what happens if a
frame in the middle of a long stream is damaged or lost? Large numbers of
succeeding frames will arrive at the receiver before the sender even finds out
that anything is wrong. When a damaged frame arrives at the receiver, it
obviously should be discarded, but what should the receiver do with all the
correct frames following it? Remember that the receiving data link layer is
obligated to hand packets to the network layer in sequence. In Fig. 3-16 we see the effects of pipelining on
error recovery. We will now examine it in some detail.
Figure
3-16. Pipelining and error recovery. Effect of an error when (a) receiver's
window size is 1 and (b) receiver's window size is large.
Two basic approaches are available
for dealing with errors in the presence of pipelining. One way, called go back
n, is for the receiver simply to discard all subsequent frames, sending no
acknowledgements for the discarded frames. This strategy corresponds to a
receive window of size 1. In other words, the data link layer refuses to accept
any frame except the next one it must give to the network layer. If the
sender's window fills up before the timer runs out, the pipeline will begin to
empty. Eventually, the sender will time out and retransmit all unacknowledged
frames in order, starting with the damaged or lost one. This approach can waste
a lot of bandwidth if the error rate is high.
In Fig. 3-16(a) we see go back n for the case in
which the receiver's window is 1. Frames 0 and 1 are correctly received and
acknowledged. Frame 2, however, is damaged or lost. The sender, unaware of this
problem, continues to send frames until the timer for frame 2 expires. Then it
backs up to frame 2 and starts all over with it, sending 2, 3, 4, etc. all over
again.
The other general strategy for
handling errors when frames are pipelined is called selective repeat. When it
is used, a bad frame that is received is discarded, but good frames received
after it are buffered. When the sender times out, only the oldest
unacknowledged frame is retransmitted. If that frame arrives correctly, the
receiver can deliver to the network layer, in sequence, all the frames it has
buffered. Selective repeat is often combined with having the receiver send a
negative acknowledgement (NAK) when it detects an error, for example, when it
receives a checksum error or a frame out of sequence. NAKs stimulate
retransmission before the corresponding timer expires and thus improve
performance.
In Fig. 3-16(b), frames 0 and 1 are again correctly
received and acknowledged and frame 2 is lost. When frame 3 arrives at the
receiver, the data link layer there notices that it has missed a frame, so it
sends back a NAK for 2 but buffers 3. When frames 4 and 5 arrive, they, too,
are buffered by the data link layer instead of being passed to the network
layer. Eventually, the NAK 2 gets back to the sender, which immediately resends
frame 2. When that arrives, the data link layer now has 2, 3, 4, and 5 and can
pass all of them to the network layer in the correct order. It can also
acknowledge all frames up to and including 5, as shown in the figure. If the
NAK should get lost, eventually the sender will time out for frame 2 and send
it (and only it) of its own accord, but that may be quite a while later. In
effect, the NAK speeds up the retransmission of one specific frame.
Selective repeat corresponds to a
receiver window larger than 1. Any frame within the window may be accepted and
buffered until all the preceding ones have been passed to the network layer.
This approach can require large amounts of data link layer memory if the window
is large.
These two alternative approaches are
trade-offs between bandwidth and data link layer buffer space. Depending on
which resource is scarcer, one or the other can be used. Figure 3-17 shows a pipelining protocol in which
the receiving data link layer only accepts frames in order; frames following an
error are discarded. In this protocol, for the first time we have dropped the
assumption that the network layer always has an infinite supply of packets to
send. When the network layer has a packet it wants to send, it can cause a network_layer_ready
event to happen. However, to enforce the flow control rule of no more than MAX_SEQ
unacknowledged frames outstanding at any time, the data link layer must be able
to keep the network layer from bothering it with more work. The library
procedures enable_network_layer and disable_network_layer do this job.
Note that a maximum of MAX_SEQ
frames and not MAX_SEQ + 1 frames may be outstanding at any instant, even
though there are MAX_SEQ + 1 distinct sequence numbers: 0, 1, 2, ..., MAX_SEQ.
To see why this restriction is required, consider the following scenario with MAX_SEQ
= 7.
- The sender sends frames 0 through 7.
- A piggybacked acknowledgement for frame 7 eventually comes back to the sender.
- The sender sends another eight frames, again with sequence numbers 0 through 7.
- Now another piggybacked acknowledgement for frame 7 comes in.
The question is this: Did all eight
frames belonging to the second batch arrive successfully, or did all eight get
lost (counting discards following an error as lost)? In both cases the receiver
would be sending frame 7 as the acknowledgement. The sender has no way of
telling. For this reason the maximum number of outstanding frames must be restricted
to MAX_SEQ.
Although protocol 5 does not buffer
the frames arriving after an error, it does not escape the problem of buffering
altogether. Since a sender may have to retransmit all the unacknowledged frames
at a future time, it must hang on to all transmitted frames until it knows for
sure that they have been accepted by the receiver. When an acknowledgement
comes in for frame n, frames n - 1, n - 2, and so on are also automatically
acknowledged. This property is especially important when some of the previous
acknowledgement-bearing frames were lost or garbled. Whenever any
acknowledgement comes in, the data link layer checks to see if any buffers can
now be released. If buffers can be released (i.e., there is some room available
in the window), a previously blocked network layer can now be allowed to cause
more network_layer_ready events.
For this protocol, we assume that
there is always reverse traffic on which to piggyback acknowledgements. If
there is not, no acknowledgements can be sent. Protocol 4 does not need this
assumption since it sends back one frame every time it receives a frame, even
if it has already sent that frame. In the next protocol we will solve the
problem of one-way traffic in an elegant way.
Because protocol 5 has multiple
outstanding frames, it logically needs multiple timers, one per outstanding
frame. Each frame times out independently of all the other ones. All of these
timers can easily be simulated in software, using a single hardware clock that
causes interrupts periodically. The pending timeouts form a linked list, with
each node of the list telling the number of clock ticks until the timer
expires, the frame being timed, and a pointer to the next node.
As an illustration of how the timers
could be implemented, consider the example of Fig. 3-18(a). Assume that the clock ticks once
every 100 msec. Initially, the real time is 10:00:00.0; three timeouts are
pending, at 10:00:00.5, 10:00:01.3, and 10:00:01.9. Every time the hardware
clock ticks, the real time is updated and the tick counter at the head of the
list is decremented. When the tick counter becomes zero, a timeout is caused
and the node is removed from the list, as shown in Fig. 3-18(b). Although this organization requires
the list to be scanned when start_timer or stop_timer is called, it does not
require much work per tick. In protocol 5, both of these routines have been
given a parameter indicating which frame is to be timed.