DATA LINK CONTROL

Our discussion so far has concerned sending signals over a transmission link. For effective digital data communications, much more is needed to control and manage the exchange. In this lesson, we shift our emphasis to that of sending data over a data communications link. To achieve the necessary control, a layer of logic is added above the physical interfacing discussed in Lesson 5; this logic is referred to as data link control or a data link control protocol. When a data link control protocol is used, the transmission medium between systems is referred to as a data link.
To see the need for data link control, we list some of the requirements and objectives for effective data communication between two directly connected transmitting-receiving stations:
* Frame synchronization. Data are sent in blocks called frames. The beginning and end of each frame must be recognizable. We briefly introduced this topic with the discussion of synchronous frames (Figure 5.2).
* Flow control. The sending station must not send frames at a rate faster than the receiving station can absorb them.
* Error control. Any bit errors introduced by the transmission system must be corrected.
* Addressing. On a multipoint line, such as a local area network (LAN), the identity of the two stations involved in a transmission must be specified.
* Control and data on same link. It is usually not desirable to have a physically separate communications path for control information. Accordingly, the receiver must be able to distinguish control information from the data being transmitted.
* Link management. The initiation, maintenance, and termination of a sustained data exchange require a fair amount of coordination and cooperation among stations. Procedures for the management of this exchange are required.
None of these requirements is satisfied by the physical interfacing techniques described in Lesson 6. We shall see in this lesson that a data link protocol that satisfies these requirements is a rather complex affair. We begin by looking at three key mechanisms that are part of data link control: flow control, error detection, and error control. Following this background information, we look at the most important example of a data link control protocol: HDLC (high-level data link control). This protocol is important for two reasons: First, it is a widely used standardized data link control protocol. Second, HDLC serves as a baseline from which virtually all other important data link control protocols are derived. Following a detailed examination of HDLC, these other protocols are briefly surveyed. Finally, an appendix to this lesson addresses some performance issues relating to data link control.
Flow Control

Flow control is a technique for assuring that a transmitting entity does not overwhelm a receiving entity with data. The receiving entity typically allocates a data buffer of some maximum length for a transfer. When data are received, the receiver must do a certain amount of processing before passing the data to the higher-level software. In the absence of flow control, the receiver's buffer may fill up and overflow while it is processing old data.
To begin, we examine mechanisms for flow control in the absence of errors. The model we will use is depicted in Figure 6.1a, which is a vertical-time sequence diagram. It has the advantages of showing time dependencies and illustrating the correct send-receive relationship. Each arrow represents a single frame transiting a data link between two stations. The data are sent in a sequence of frames, with each frame containing a portion of the data and some control information. For now, we assume that all frames that are transmitted are successfully received; no frames are lost and none arrive with errors. Furthermore, frames arrive in the same order in which they are sent. However, each transmitted frame suffers an arbitrary and variable amount of delay before reception.
Stop-and-Wait Flow Control

The simplest form of flow control, known as stop-and-wait flow control, works as follows. A source entity transmits a frame. After reception, the destination entity indicates its willingness to accept another frame by sending back an acknowledgment to the frame just received. The source must wait until it receives the acknowledgment before sending the next frame. The destination can thus stop the flow of data by simply withholding acknowledgment. This procedure works fine and, indeed, can hardly be improved upon when a message is sent in a few large frames. However, it is often the case that a source will break up a large block of data into smaller blocks and transmit the data in many frames. This is done for the following reasons:
* The buffer size of the receiver may be limited.
* The longer the transmission, the more likely that there will be an error, necessitating retransmission of the entire frame. With smaller frames, errors are detected sooner, and a smaller amount of data needs to be retransmitted.
* On a shared medium, such as a LAN, it is usually desirable not to permit one station to occupy the medium for an extended period, as this causes long delays at the other sending stations.
With the use of multiple frames for a single message, the stop-and-wait procedure may be inadequate. The essence of the problem is that only one frame at a time can be in transit. In situations where the bit length of the link is greater than the frame length, serious inefficiencies result; this is illustrated in Figure 6.2. In the figure, the transmission time (the time it takes for a station to transmit a frame) is normalized to one, and the propagation delay (the time it takes for a bit to travel from sender to receiver) is expressed as the variable a. In other words, when a is less than 1, the propagation time is less than the transmission time. In this case, the frame is sufficiently long that the first bits of the frame have arrived at the destination before the source has completed the transmission of the frame. When a is greater than 1, the propagation time is greater than the transmission time. In this case, the sender completes transmission of the entire frame before the leading bits of that frame arrive at the receiver. Put another way, larger values of a are consistent with higher data rates and/or longer distances between stations. Lesson 6A discusses a and data link performance.
Both parts of the figure (a and b) consist of a sequence of snapshots of the transmission process over time. In both cases, the first four snapshots show the process of transmitting a frame containing data, and the last snapshot shows the return of a small acknowledgment frame. Note that for a > 1, the line is always underutilized, and, even for a < 1, the line is inefficiently utilized. In essence, for very high data rates, or for very long distances between sender and receiver, stop-and-wait flow control provides inefficient line utilization.
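This inefficiency can be quantified. Using the standard result developed in Lesson 6A, and assuming negligible acknowledgment and processing times, a stop-and-wait link is busy sending data for 1 time unit out of every 1 + 2a (transmit the frame, then wait 2a for the last bit to propagate and the acknowledgment to return). A minimal sketch:

```python
def stop_and_wait_utilization(a: float) -> float:
    """Fraction of time the line carries data under stop-and-wait.

    a is the ratio of propagation time to frame transmission time.
    Each frame cycle lasts 1 + 2a normalized time units, of which
    only 1 unit is spent actually transmitting data.
    """
    return 1.0 / (1.0 + 2.0 * a)

# When a < 1 the line is merely underused; when a > 1 it is mostly idle.
for a in (0.1, 1.0, 10.0):
    print(f"a = {a:>4}: utilization = {stop_and_wait_utilization(a):.3f}")
```

Note how utilization collapses as a grows, which is exactly the high-data-rate, long-distance case described above.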
Sliding-Window Flow Control

The essence of the problem described so far is that only one frame at a time can be in transit. In situations where the bit length of the link is greater than the frame length (a > 1), serious inefficiencies result. Efficiency can be greatly improved by allowing multiple frames to be in transit at the same time.
Let us examine how this might work for two stations, A and B, connected via a full-duplex link. Station B allocates buffer space for n frames. Thus, B can accept n frames, and A is allowed to send n frames without waiting for any acknowledgments. To keep track of which frames have been acknowledged, each is labeled with a sequence number. B acknowledges a frame by sending an acknowledgment that includes the sequence number of the next frame expected. This acknowledgment also implicitly announces that B is prepared to receive the next n frames, beginning with the number specified. This scheme can also be used to acknowledge multiple frames. For example, B could receive frames 2, 3, and 4, but withhold acknowledgment until frame 4 has arrived; by then returning an acknowledgment with sequence number 5, B acknowledges frames 2, 3, and 4 at one time. A maintains a list of sequence numbers that it is allowed to send, and B maintains a list of sequence numbers that it is prepared to receive. Each of these lists can be thought of as a window of frames. The operation is referred to as sliding-window flow control.
Several additional comments need to be made. Because the sequence number to be used occupies a field in the frame, it is clearly of bounded size. For example, for a 3-bit field, the sequence number can range from 0 to 7. Accordingly, frames are numbered modulo 8; that is, after sequence number 7, the next number is 0. In general, for a k-bit field the range of sequence numbers is 0 through 2^k - 1, and frames are numbered modulo 2^k. With this in mind, Figure 6.3 is a useful way of depicting the sliding-window process. It assumes the use of a 3-bit sequence number, so that frames are numbered sequentially from 0 through 7, and then the same numbers are reused for subsequent frames. The shaded rectangle indicates that the sender may transmit 7 frames, beginning with frame 6. Each time a frame is sent, the shaded window shrinks; each time an acknowledgment is received, the shaded window grows.

The actual window size need not be the maximum possible size for a given sequence-number length. For example, using a 3-bit sequence number, a window size of 4 could be configured for the stations using the sliding-window flow control protocol.
An example is shown in Figure 6.4. The example assumes a 3-bit sequence number field and a maximum window size of seven frames. Initially, A and B have windows indicating that A may transmit seven frames, beginning with frame 0 (F0). After transmitting three frames (F0, F1, F2) without acknowledgment, A has shrunk its window to four frames. The window indicates that A may transmit four frames, beginning with frame number 3. B then transmits an RR (receive-ready) 3, which means: "I have received all frames up through frame number 2 and am ready to receive frame number 3; in fact, I am prepared to receive seven frames, beginning with frame number 3." With this acknowledgment, A is back up to permission to transmit seven frames, still beginning with frame 3. A proceeds to transmit frames 3, 4, 5, and 6. B returns an RR 4, which allows A to send up to and including frame F2.
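The sender-side bookkeeping in this example can be sketched in a few lines of Python. The class below is an illustrative model only (not taken from any standard): it tracks the next sequence number and the count of unacknowledged frames, and it reproduces the Figure 6.4 exchange.

```python
class SlidingWindowSender:
    """Sender-side window bookkeeping for a k-bit sequence number (illustrative)."""

    def __init__(self, k: int = 3, window: int = 7):
        self.modulus = 2 ** k   # sequence numbers run 0 .. 2^k - 1
        self.window = window    # maximum frames outstanding
        self.next_seq = 0       # next sequence number to send
        self.unacked = 0        # frames sent but not yet acknowledged

    def send(self) -> int:
        if self.unacked >= self.window:
            raise RuntimeError("window closed: wait for an acknowledgment")
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.modulus
        self.unacked += 1
        return seq

    def receive_rr(self, n: int) -> None:
        """RR n acknowledges every outstanding frame up through n - 1 (mod 2^k)."""
        oldest = (self.next_seq - self.unacked) % self.modulus
        self.unacked -= (n - oldest) % self.modulus

A = SlidingWindowSender()
print([A.send() for _ in range(3)])  # A sends F0, F1, F2; window shrinks to 4
A.receive_rr(3)                      # "ready for frame 3": window back to 7
print([A.send() for _ in range(4)])  # A sends F3, F4, F5, F6
A.receive_rr(4)                      # A may now send up through frame F2
```

After the final RR 4, three frames (F4, F5, F6) remain unacknowledged, leaving a window of four frames (F7, F0, F1, F2), exactly as in the prose walkthrough.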
The mechanism so far described does indeed provide a form of flow control: The receiver need only be able to accommodate 7 frames beyond the one it has last acknowledged. To supplement this, most protocols also allow a station to completely cut off the flow of frames from the other side by sending a Receive-Not-Ready (RNR) message, which acknowledges former frames but forbids transfer of future frames. Thus, RNR 5 means: "I have received all frames up through number 4 but am unable to accept any more." At some subsequent point, the station must send a normal acknowledgment to reopen the window.
So far, we have discussed transmission in one direction only. If two stations exchange data, each needs to maintain two windows, one for transmit and one for receive, and each side needs to send the data and acknowledgments to the other. To provide efficient support for this requirement, a feature known as piggybacking is typically provided. Each data frame includes a field that holds the sequence number of that frame plus a field that holds the sequence number used for acknowledgment. Thus, if a station has data to send and an acknowledgment to send, it sends both together in one frame, thereby saving communication capacity. Of course, if a station has an acknowledgment but no data to send, it sends a separate acknowledgment frame. If a station has data to send but no new acknowledgment to send, it must repeat the last acknowledgment that it sent; this is because the data frame includes a field for the acknowledgment number, and some value must be put into that field. When a station receives a duplicate acknowledgment, it simply ignores it.
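What piggybacking implies for the frame layout can be sketched as follows; the Python fragment below is purely illustrative, and its field names are hypothetical rather than drawn from any real protocol:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    """A piggybacked frame (field names are illustrative, not from a standard)."""
    seq: int                        # sequence number of this data frame
    ack: int                        # number of the next frame expected from the peer
    data: Optional[bytes] = None    # None for a pure acknowledgment frame

# Data and an acknowledgment ride together in one frame:
both = Frame(seq=4, ack=2, data=b"payload")
# With nothing to send, a station emits a separate acknowledgment frame;
# the ack field alone carries meaning here:
ack_only = Frame(seq=0, ack=5)
```

The saving is that every data frame carries the acknowledgment field anyway, so no extra frame is needed when both flow in the same direction.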
It should be clear from the discussion that sliding-window flow control is potentially much more efficient than stop-and-wait flow control. The reason is that, with sliding-window flow control, the transmission link is treated as a pipeline that may be filled with frames in transit. In contrast, with stop-and-wait flow control, only one frame may be in the pipe at a time.
Error Detection

In earlier lessons, we talked about transmission impairments and the effect of data rate and signal-to-noise ratio on bit error rate. Regardless of the design of the transmission system, there will be errors, resulting in the change of one or more bits in a transmitted frame.
Let us define these probabilities with respect to errors in transmitted frames:

* Pb: Probability of a single bit error; also known as the bit error rate.
* P1: Probability that a frame arrives with no bit errors.
* P2: Probability that a frame arrives with one or more undetected bit errors.
* P3: Probability that a frame arrives with one or more detected bit errors but no undetected bit errors.
First, consider the case when no means are taken to detect errors; the probability of detected errors (P3), then, is zero. To express the remaining probabilities, assume that the probability that any bit is in error (Pb) is constant and independent for each bit. Then we have

P1 = (1 - Pb)^F
P2 = 1 - P1

where F is the number of bits per frame. In words, the probability that a frame arrives with no bit errors decreases when the probability of a single bit error increases, as you would expect. Also, the probability that a frame arrives with no bit errors decreases with increasing frame length; the longer the frame, the more bits it has and the higher the probability that one of these is in error.
Let us take a simple example to illustrate these relationships. A defined objective for ISDN connections is that the bit error rate on a 64-kbps channel should be less than 10^-6 on at least 90% of observed 1-minute intervals. Suppose now that we have the rather modest user requirement that at most one frame with an undetected bit error should occur per day on a continuously used 64-kbps channel, and let us assume a frame length of 1000 bits. The number of frames that can be transmitted in a day comes out to 5.529 × 10^6, which yields a desired frame error rate of P2 = 1/(5.529 × 10^6) ≈ 0.18 × 10^-6. But if we assume a bit error rate of 10^-6, then P1 = (0.999999)^1000 ≈ 0.999, and therefore P2 ≈ 10^-3, roughly three orders of magnitude worse than the requirement. This is the kind of result that motivates the use of error-detection techniques.
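The arithmetic in this example follows directly from the formulas above. A short Python check, assuming independent bit errors and no error detection:

```python
def frame_error_probs(pb: float, f: int):
    """P1 = probability a frame of f bits arrives error-free; P2 = 1 - P1
    (no error detection in use, bit errors assumed independent)."""
    p1 = (1.0 - pb) ** f
    return p1, 1.0 - p1

frames_per_day = 64_000 * 86_400 // 1000   # 64-kbps channel, 1000-bit frames
desired_p2 = 1 / frames_per_day            # at most one bad frame per day
p1, actual_p2 = frame_error_probs(1e-6, 1000)

print(f"frames/day = {frames_per_day}")    # 5529600, i.e. 5.529 x 10^6
print(f"desired P2 = {desired_p2:.2e}")    # ~0.18 x 10^-6
print(f"actual  P2 = {actual_p2:.2e}")     # ~10^-3, far above the requirement
```

The three-orders-of-magnitude gap between the desired and actual P2 is the motivation for adding an error-detecting code.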
All of these techniques operate on the following principle (Figure 6.5). For a given frame of bits, additional bits that constitute an error-detecting code are added by the transmitter. This code is calculated as a function of the other transmitted bits. The receiver performs the same calculation and compares the two results. A detected error occurs if and only if there is a mismatch. Thus, P3 is the probability that, if a frame contains errors, the error-detection scheme will detect that fact. P2 is known as the residual error rate, and is the probability that an error will be undetected despite the use of an error-detection scheme.
Parity Check

The simplest error-detection scheme is to append a parity bit to the end of a block of data. A typical example is ASCII transmission, in which a parity bit is attached to each 7-bit ASCII character. The value of this bit is selected so that the character has an even number of 1s (even parity) or an odd number of 1s (odd parity). So, for example, if the transmitter is transmitting an ASCII G (1110001) and using odd parity, it will append a 1 and transmit 11100011. The receiver examines the received character and, if the total number of 1s is odd, assumes that no error has occurred. If one bit (or any odd number of bits) is erroneously inverted during transmission (for example, 11000011), then the receiver will detect an error. Note, however, that if two (or any even number) of bits are inverted due to error, an undetected error occurs. Typically, even parity is used for synchronous transmission and odd parity for asynchronous transmission.
The use of the parity bit is not foolproof, as noise impulses are often long enough to destroy more than one bit, particularly at high data rates.
Cyclic Redundancy Check (CRC)

One of the most common, and one of the most powerful, error-detecting codes is the cyclic redundancy check (CRC), which can be described as follows. Given a k-bit block of bits, or message, the transmitter generates an n-bit sequence, known as a frame check sequence (FCS), so that the resulting frame, consisting of k + n bits, is exactly divisible by some predetermined number. The receiver then divides the incoming frame by that number and, if there is no remainder, assumes there was no error.
To clarify this, we present the procedure in three ways: modulo 2 arithmetic, polynomials, and digital logic.

Modulo 2 Arithmetic

Modulo 2 arithmetic uses binary addition with no carries, which is just the exclusive-OR (XOR) operation. For example:

    1111        11001
  + 1010      + 01101
  ------      -------
    0101        10100
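The modulo 2 division that produces the FCS can be sketched directly in Python. The message and divisor values below are illustrative choices, not taken from this lesson's figures: the transmitter appends n zero bits to the message, divides by the (n+1)-bit divisor using XOR, and uses the remainder as the FCS.

```python
def crc_remainder(message: str, divisor: str) -> str:
    """Modulo 2 division: append len(divisor)-1 zero bits to the message,
    XOR the divisor in wherever the running value has a 1 bit, and
    return the final remainder (the FCS)."""
    n = len(divisor) - 1
    bits = list(message + "0" * n)
    for i in range(len(message)):
        if bits[i] == "1":
            # XOR the divisor into the bits aligned at position i.
            for j, d in enumerate(divisor):
                bits[i + j] = "0" if bits[i + j] == d else "1"
    return "".join(bits[-n:])

msg, p = "1010001101", "110101"    # illustrative 10-bit message, 6-bit divisor
fcs = crc_remainder(msg, p)
frame = msg + fcs                  # transmitted frame: message plus FCS
# An error-free frame leaves no remainder when the receiver repeats the division.
print(fcs, crc_remainder(frame, p))
```

By construction the transmitted frame is exactly divisible by the divisor, so the receiver's division of an error-free frame yields an all-zero remainder.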
Viewing the bit strings as polynomials with binary coefficients, with P(X) as the divisor polynomial and E(X) as the error pattern, an error E(X) will only be undetectable if it is divisible by P(X). It can be shown [PETE61] that all of the following errors are not divisible by a suitably chosen P(X) and, hence, are detectable:

* All single-bit errors.
* All double-bit errors, as long as P(X) has at least three 1s.
* Any odd number of errors, as long as P(X) contains a factor (X + 1).
* Any burst error for which the length of the burst is less than the length of the divisor polynomial; that is, less than or equal to the length of the FCS.
* Most larger burst errors.