LAN SYSTEMS
The
medium access control technique and topology
are
key characteristics used in the classification of LANs and in the
development
of standards. The following systems are discussed in this lesson:
Ethernet and Fast Ethernet (CSMA/CD)
Token Ring/FDDI
100VG-AnyLAN
ATM LANs
Fibre Channel
Wireless LANs
Ethernet/Fast Ethernet (CSMA/CD)
The
most commonly used medium access control technique for bus/tree and star
topologies is carrier-sense multiple access with collision detection (CSMA/CD).
The
original baseband version of this technique was developed by Xerox as part of
the
Ethernet LAN. The original broadband version was developed by MITRE as
part
of its MITREnet LAN. All of this work formed the basis for the IEEE 802.3
standard.
In
this lesson, we will focus on the IEEE 802.3 standard. As with other LAN
standards,
there is both a medium access control layer and a physical layer, which
are
considered in turn in what follows.
IEEE 802.3 Medium Access Control
It
is easier to understand the operation of CSMA/CD if we look first at some
earlier schemes from which CSMA/CD evolved.
Precursors
CSMA/CD
and its precursors can be termed random access, or contention, techniques.
They
are random access in the sense that there is no predictable or scheduled
time
for any station to transmit; station transmissions are ordered randomly.
They
exhibit contention in the sense that stations contend for time on the medium.
The
earliest of these techniques, known as ALOHA, was developed for
packet
radio networks. However, it is applicable to any shared transmission
medium.
ALOHA, or pure ALOHA as it is sometimes called, is a true free-for-all.
Whenever
a station has a frame to send, it does so. The station then listens for an
amount
of time equal to the maximum possible round-trip propagation delay on the
network
(twice the time it takes to send a frame between the two most widely separated
stations)
plus a small fixed time increment. If the station hears an acknowledgment
during that time, fine; otherwise, it resends the frame. If the station fails
to
receive
an acknowledgment after repeated transmissions, it gives up. A receiving
station
determines the correctness of an incoming frame by examining a frame check
sequence field, as in HDLC. If the frame is valid and if the destination
address
in the frame header matches the receiver's address, the station immediately
sends
an acknowledgment. The frame may be invalid due to noise on the channel
or
because another station transmitted a frame at about the same time. In the
latter
case,
the two frames may interfere with each other at the receiver so that neither
gets
through; this is known as a collision. If a received
frame is determined to be
invalid,
the receiving station simply ignores the frame.
ALOHA
is as simple as can be, and pays a penalty for it. Because the number
of
collisions rises rapidly with increased load, the maximum utilization of the
channel
is
only about 18% (see [STAL97]).
To
improve efficiency, a modification of ALOHA, known as slotted ALOHA,
was
developed. In this scheme, time on the channel is organized into uniform slots
whose
size equals the frame transmission time. Some central clock or other technique
is
needed to synchronize all stations. Transmission is permitted to begin only
at
a slot boundary. Thus, frames that do overlap will do so totally. This
increases the
maximum
utilization of the system to about 37%.
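The 18% and 37% figures quoted above are the maxima of the classical throughput formulas S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, where G is the offered load in frames per frame time. A quick sketch reproduces them (these formulas are standard results, not derived in this text):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e**(-2G): throughput vs. offered load G for pure ALOHA."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e**(-G): slots confine overlaps, halving the vulnerable period."""
    return G * math.exp(-G)

# The maxima occur at G = 0.5 and G = 1.0:
# 1/(2e) ~ 0.184 (about 18%) and 1/e ~ 0.368 (about 37%)
```

Slotting halves the vulnerable period (a frame can only collide with frames in its own slot, not with frames overlapping from either side), which is exactly why the maximum doubles from 1/(2e) to 1/e.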
Both
ALOHA and slotted ALOHA exhibit poor utilization. Both fail to take
advantage
of one of the key properties of both packet radio and LANs, which is that
propagation
delay between stations is usually very small compared to frame transmission
time.
Consider the following observations. If the station-to-station propagation
time
is large compared to the frame transmission time, then, after a station
launches
a frame, it will be a long time before other stations know about it. During
that
time, one of the other stations may transmit a frame; the two frames may
interfere
with
each other and neither gets through. Indeed, if the distances are great
enough,
many stations may begin transmitting, one after the other, and none of
their
frames get through unscathed. Suppose, however, that the propagation time is
small
compared to frame transmission time. In that case, when a station launches a
frame,
all the other stations know it almost immediately. So, if they had any sense,
they
would not try transmitting until the first station was done. Collisions would
be
rare
because they would occur only when two stations began to transmit almost
simultaneously.
Another way to look at it is that a short delay time provides the stations
with
better feedback about the state of the network; this information can be
used
to improve efficiency.
The
foregoing observations led to the development of carrier-sense multiple
access
(CSMA). With CSMA, a station wishing to transmit first listens to the
medium
to determine if another transmission is in progress (carrier sense). If the
medium
is in use, the station must wait. If the medium is idle, the station may
transmit.
It
may happen that two or more stations attempt to transmit at about the same
time.
If this happens, there will be a collision; the data from both transmissions
will
be
garbled and not received successfully. To account for this, a station waits a
reasonable
amount
of time, after transmitting, for an acknowledgment, taking into
account
the maximum round-trip propagation delay, and the fact that the acknowledging
station
must also contend for the channel in order to respond. If there is no
acknowledgment,
the station assumes that a collision has occurred and retransmits.
One
can see how this strategy would be effective for networks in which the
average
frame transmission time is much longer than the propagation time. Collisions
can
occur only when more than one user begins transmitting within a short
time
(the period of the propagation delay). If a station begins to transmit a frame,
and
there are no collisions during the time it takes for the leading edge of the
packet
to
propagate to the farthest station, then there will be no collision for this
frame
because
all other stations are now aware of the transmission.
The
maximum utilization achievable using CSMA can far exceed that of
ALOHA
or slotted ALOHA. The maximum utilization depends on the length of
the
frame and on the propagation time; the longer the frames or the shorter the
propagation
time, the higher the utilization. This subject is explored in Appendix
13A.
With
CSMA, an algorithm is needed to specify what a station should do if the
medium
is found busy. The most common approach, and the one used in IEEE
802.3,
is the 1-persistent technique. A station wishing to transmit listens to
the
medium
and obeys the following rules:
1. If the medium is idle, transmit;
otherwise, go to step 2.
2.
If the medium is busy, continue to listen until the channel is sensed idle;
then
transmit
immediately.
If
two or more stations are waiting to transmit, a collision is guaranteed.
Things
get sorted out only after the collision.
Description
of CSMA/CD
CSMA,
although more efficient than ALOHA or slotted ALOHA, still has one
glaring
inefficiency: When two frames collide, the medium remains unusable for the
duration
of transmission of both damaged frames. For long frames, compared to
propagation
time, the amount of wasted capacity can be considerable. This waste
can
be reduced if a station continues to listen to the medium while transmitting.
This
leads to the following rules for CSMA/CD:
1.
If
the medium is idle, transmit; otherwise, go to step 2.
2.
If the medium is busy, continue to listen until the channel is idle, then
transmit
immediately.
3.
If
a collision is detected during transmission, transmit a brief jamming signal
to
assure that all stations know that there has been a collision and then cease
transmission.
4.
After
transmitting the jamming signal, wait a random amount of time, then
attempt
to transmit again. (Repeat from step 1.)
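The "random amount of time" in step 4 is specified in IEEE 802.3 as truncated binary exponential backoff. The sketch below assumes that rule; medium_idle and collided are hypothetical stand-ins for the physical layer, and the 16-attempt limit and 51.2 µs slot time are the standard's 10-Mbps values:

```python
import random

SLOT_TIME = 51.2e-6   # 802.3 slot time at 10 Mbps, in seconds
MAX_ATTEMPTS = 16     # after 16 collisions the frame is abandoned
BACKOFF_LIMIT = 10    # the exponent stops growing after 10 collisions

def backoff_slots(attempt, rng=random):
    """Truncated binary exponential backoff: after the n-th collision,
    wait r slot times, r drawn uniformly from [0, 2**min(n, 10) - 1]."""
    return rng.randrange(2 ** min(attempt, BACKOFF_LIMIT))

def transmit(medium_idle, collided, rng=random):
    """Sketch of the CSMA/CD rules. medium_idle() and collided() are
    hypothetical stand-ins for the physical layer. Returns the number
    of attempts used on success."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while not medium_idle():            # rule 2: listen until idle
            pass
        if not collided():                  # rules 1 and 3: transmit,
            return attempt                  # watching for a collision
        wait = backoff_slots(attempt, rng) * SLOT_TIME  # rule 4: random wait
        # (a real MAC would pause for `wait` seconds before retrying)
    raise RuntimeError("excessive collisions: gave up after 16 attempts")
```

Doubling the backoff range after each collision adapts the retry rate to the (unknown) number of contending stations, which is what makes the scheme stable under load.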
Figure
13.1 illustrates the technique for a baseband bus. At time t0, station A
begins transmitting a packet addressed to D. At t1, both B and C are ready to
transmit.
transmit.
B
senses a transmission and so defers. C, however, is still unaware of A's
transmission
and
begins its own transmission. When A's transmission reaches C, at t2, C
detects
the collision and ceases transmission. The effect of the collision propagates
back
to A, where it is detected some time later, t3, at which time A
ceases transmission.
With
CSMA/CD, the amount of wasted capacity is reduced to the time it takes
to
detect a collision. Question: how long does that take? Let us consider first
the
case
of a baseband bus and consider two stations as far apart as possible. For
example,
in
Figure 13.1, suppose that station A begins a transmission and that just before
that
transmission reaches D, D is ready to transmit. Because D is not yet aware of
A's
transmission, it begins to transmit. A collision occurs almost immediately and
is
recognized
by D. However, the collision must propagate all the way back to A
before
A is aware of the collision. By this line of reasoning, we conclude that the
amount
of time that it takes to detect a collision is no greater than twice the
end-to-end
propagation
delay. For a broadband bus, the delay is even longer. Figure 13.2
shows
a dual-cable system. This time, the worst case occurs for two stations as close
together
as possible and as far as possible from the headend. In this case, the maximum
time
to detect a collision is four times the propagation delay from an end of
the
cable to the headend.
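The worst-case figures above can be checked with a little arithmetic. The propagation speed assumed here (2 × 10^8 m/s, a typical figure for coaxial cable) is illustrative and not from the text:

```python
PROP_SPEED = 2.0e8  # m/s; an assumed, typical figure for coaxial cable

def baseband_detect_time(end_to_end_m):
    """Baseband bus: worst case is twice the end-to-end propagation delay."""
    return 2 * end_to_end_m / PROP_SPEED

def broadband_detect_time(end_to_headend_m):
    """Dual-cable broadband bus: four times the end-to-headend delay,
    since the signal must go to the headend and back for each station."""
    return 4 * end_to_headend_m / PROP_SPEED

# e.g. a 500-m baseband segment: 2 * 500 / 2e8 = 5 microseconds
```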
An
important rule followed in most CSMA/CD systems, including the IEEE
standard,
is that frames should be long enough to allow collision detection prior to
the
end of transmission. If shorter frames are used, then collision detection does
not
occur,
and CSMA/CD exhibits the same performance as the less efficient CSMA
protocol.
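For the 10-Mbps standard, this rule fixes the minimum frame size. Taking the 802.3 slot time of 51.2 µs as the worst-case collision window:

```python
DATA_RATE = 10e6     # bits per second
SLOT_TIME = 51.2e-6  # the 802.3 worst-case collision window at 10 Mbps

# A frame must still be in transmission when news of a collision returns,
# so it must be at least data_rate * slot_time bits long:
MIN_BITS = int(DATA_RATE * SLOT_TIME)   # 512 bits
MIN_OCTETS = MIN_BITS // 8              # 64 octets, the 802.3 minimum frame
```

This is the reason for the Pad field in the 802.3 MAC frame described later in this lesson: short LLC data units are padded up to the minimum.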
Although
the implementation of CSMA/CD is substantially the same for
baseband
and broadband, there are differences. One is the means for performing
carrier
sense; for baseband systems, this is done by detecting a voltage pulse train.
For
broadband, the RF carrier is detected.
Collision
detection also differs for the two systems. For baseband, a collision
should
produce substantially higher voltage swings than those produced by a single
transmitter.
Accordingly, the IEEE standard dictates that the transmitter will
detect
a collision if the signal on the cable at the transmitter tap point exceeds the
maximum
that could be produced by the transmitter alone. Because a transmitted
signal
attenuates as it propagates, there is a potential problem: If two stations far
apart
are transmitting, each station will receive a greatly attenuated signal from
the
other.
The signal strength could be so small that when it is added to the transmitted
signal
at the transmitter tap point, the combined signal does not exceed the CD
threshold.
For this reason, among others, the IEEE standard restricts the maximum
length
of coaxial cable to 500 m for 10BASE5 and to 200 m for 10BASE2.
A
much simpler collision detection scheme is possible with the twisted pair
star-topology
approach (Figure 12.13). In this case, collision detection is based on
logic
rather than on sensing voltage magnitudes. For any hub, if there is activity
(signal)
on more than one input, a collision is assumed. A special signal called the
collision
presence signal is generated. This signal is generated and sent out as long
as
activity is sensed on any of the input lines. This signal is interpreted by
every
node
as an occurrence of a collision.
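The hub logic just described can be sketched in a few lines; the function name and return encoding are invented for illustration:

```python
def hub_output(inputs):
    """Logic-based collision detection at a twisted-pair hub (10BASE-T).
    inputs: booleans, True where activity is sensed on that port."""
    active = [port for port, busy in enumerate(inputs) if busy]
    if len(active) > 1:
        # collision presence signal, sent as long as >1 input is active
        return "collision_presence"
    if len(active) == 1:
        # repeat the single active input on all of the other lines
        return ("repeat", active[0])
    return "idle"
```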
There
are several possible approaches to collision detection in broadband systems.
The
most common of these is to perform a bit-by-bit comparison between
transmitted
and received data. When a station transmits on the inbound channel, it
begins
to receive its own transmission on the outbound channel after a propagation
delay
to the headend and back. Note the similarity to a satellite link. Another
approach,
for split systems, is for the headend to perform detection based on garbled
data.
MAC
Frame
Figure
13.3 depicts the frame format for the 802.3 protocol; it consists of the
following
fields:
Preamble.
A
7-octet pattern of alternating 0s and 1s used by the receiver to
establish
bit synchronization.
Start
frame delimiter. The
sequence 10101011, which indicates the actual start
of
the frame and which enables the receiver to locate the first bit of the rest
of
the frame.
Destination
address (DA). Specifies
the station(s) for which the frame is
intended.
It may be a unique physical address, a group address, or a global
address.
The choice of a 16- or 48-bit address length is an implementation
decision,
and must be the same for all stations on a particular LAN.
Source
address (SA). Specifies
the station that sent the frame.
Length.
Length
of the LLC data field.
LLC
data. Data
unit supplied by LLC.
Pad.
Octets
added to ensure that the frame is long enough for proper CD
operation.
Frame check sequence (FCS). A 32-bit cyclic
redundancy check, based on all
fields
except the preamble, the SFD, and the FCS.
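The field layout can be made concrete with a frame-builder sketch. This is a byte-level illustration only: the bit ordering on the wire is not modeled, the 64-octet minimum assumed for the pad is the 10-Mbps figure, and 48-bit addresses are assumed. zlib.crc32 happens to use the same 32-bit polynomial as the 802.3 FCS:

```python
import struct
import zlib

def build_8023_frame(dst, src, llc_data):
    """Byte-level sketch of an 802.3 frame with 48-bit addresses."""
    preamble = bytes([0b10101010]) * 7   # 7 octets of alternating bits
    sfd = bytes([0b10101011])            # start frame delimiter
    length = struct.pack(">H", len(llc_data))
    # Pad so DA + SA + Length + data + pad + FCS reaches the 64-octet
    # minimum needed for proper collision detection.
    pad = bytes(max(0, 64 - (6 + 6 + 2 + len(llc_data) + 4)))
    covered = dst + src + length + llc_data + pad
    fcs = struct.pack(">I", zlib.crc32(covered))  # over all fields except
    return preamble + sfd + covered + fcs         # preamble, SFD, and FCS
```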
IEEE
802.3
10-Mbps Specifications (Ethernet)
The
IEEE
802.3
committee has been the most active in defining alternative physical
configurations;
this is both good and bad. On the good side, the standard has
been
responsive to evolving technology. On the bad side, the customer, not to
mention
the
potential vendor, is faced with a bewildering array of options. However, the
committee
has been at pains to ensure that the various options can be easily integrated
into
a configuration that satisfies a variety of needs. Thus, the user that has
a
complex set of requirements may find the flexibility and variety of the 802.3
standard
to
be an asset.
To
distinguish among the various implementations that are available, the
committee
has developed a concise notation:
<data
rate in Mbps><signaling method><maximum segment length in hundreds
of
meters>
The
defined alternatives are 10BASE5, 10BASE2, 10BASE-T, 10BROAD36, and 10BASE-F.
Note
that 10BASE-T and 10BASE-F do not quite follow the notation; "T"
stands
for twisted pair, and "F" stands for optical fiber. Table 13.1
summarizes these
options.
All of the alternatives listed in the table specify a data rate of 10 Mbps. In
addition
to these alternatives, there are several versions that operate at 100 Mbps;
these
are covered later in this lesson.
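A small parser makes the notation, and its two exceptions, concrete (the function name is invented for illustration):

```python
import re

def parse_8023_name(name):
    """Decode <rate in Mbps><signaling><max segment length in 100s of m>.
    10BASE-T and 10BASE-F break the pattern: their suffix names the
    medium instead of a segment length."""
    m = re.fullmatch(r"(\d+)(BASE|BROAD)-?(\w+)", name)
    rate, signaling, suffix = int(m.group(1)), m.group(2), m.group(3)
    if suffix.isdigit():
        return rate, signaling, int(suffix) * 100   # segment length, meters
    medium = {"T": "twisted pair", "F": "optical fiber"}[suffix[0]]
    return rate, signaling, medium
```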
10BASE5
Medium Specification
10BASE5
is the original 802.3 medium specification and is based directly on
Ethernet.
10BASE5
specifies the use of 50-ohm coaxial cable and uses Manchester digital
signaling. The maximum length of a cable segment is set at 500 meters. The
The maximum length of a cable segment is set at 500 meters. The
length
of the network can be extended by the use of repeaters, which are transparent
to
the MAC level; as they do no buffering, they do not isolate one segment from
another.
So, for example, if two stations on different segments attempt to transmit
at
the same time, their transmissions will collide. To avoid looping, only one
path of
segments
and repeaters is allowed between any two stations. The standard allows a
maximum
of four repeaters in the path between any two stations, thereby extending
the
effective length of the medium to 2.5 kilometers.
10BASE2
Medium Specification
To
provide a lower-cost system than 10BASE5 for personal computer LANs,
10BASE2
was added. As with 10BASE5, this specification uses 50-ohm coaxial
cable
and Manchester signaling. The key difference is that 10BASE2 uses a thinner
cable,
which supports fewer taps over a shorter distance than the 10BASE5 cable.
Because
they have the same data rate, it is possible to combine 10BASE5 and
10BASE2
segments in the same network, by using a repeater that conforms to
10BASE5
on one side and 10BASE2 on the other side. The only restriction is that
a
10BASE2 segment should not be used to bridge two 10BASE5 segments, because
a
"backbone" segment should be as resistant to noise as the segments it
connects.
10BASE-T
Medium Specification
By
sacrificing some distance, it is possible to develop a 10-Mbps LAN using the
unshielded
twisted pair medium. Such wire is often found prewired in office buildings
as
excess telephone cable, and can be used for LANs. Such an approach is specified
in
the 10BASE-T specification. The 10BASE-T specification defines a starshaped
topology.
A simple system consists of a number of stations connected to a
central
point, referred to as a multiport repeater, via two twisted pairs. The central
point
accepts input on any one line and repeats it on all of the other lines.
Stations
attach to the multiport repeater via a point-to-point link. Ordinarily,
the
link consists of two unshielded twisted pairs. Because of the high data rate
and
the
poor transmission qualities of unshielded twisted pair, the length of a link is
limited
to
100 meters. As an alternative, an optical fiber link may be used. In this case,
the
maximum length is 500 m.
10BROAD36
Medium Specification
The
10BROAD36 specification is the only 802.3 specification for broadband. The
medium
employed is the standard 75-ohm CATV coaxial cable. Either a dual-cable
or
split-cable configuration is allowed. The maximum length of an individual
segment,
emanating
from the headend, is 1800 meters; this results in a maximum end-to-end
span of 3600 meters.
The
signaling on the cable is differential phase-shift keying (DPSK). In ordinary
PSK,
a binary zero is represented by a carrier with a particular phase, and a
binary
one is represented by a carrier with the opposite phase (180-degree difference).
DPSK
makes use of differential encoding, in which a change of phase occurs when
a
zero occurs, and there is no change of phase when a one occurs. The advantage
of
differential
encoding is that it is easier for the receiver to detect a change in phase
than
to determine the phase itself.
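The differential encoding rule (phase change on a zero, no change on a one) can be sketched directly; phases are represented abstractly as 0 and 1, standing for 0 and 180 degrees:

```python
def dpsk_encode(bits, phase=0):
    """Differential encoding per the rule above: a 0 toggles the carrier
    phase, a 1 leaves it unchanged."""
    out = []
    for b in bits:
        if b == 0:
            phase ^= 1          # change of phase when a zero occurs
        out.append(phase)       # no change of phase when a one occurs
    return out

def dpsk_decode(phases, phase=0):
    """The receiver only detects phase *changes*, never absolute phase."""
    out = []
    for p in phases:
        out.append(0 if p != phase else 1)
        phase = p
    return out
```

A round trip through both functions recovers the original bits, which illustrates why the receiver never needs to know the absolute carrier phase.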
The
characteristics of the modulation process are specified so that the resulting
10
Mbps signal fits into a 14 MHz bandwidth.
10BASE-F
Medium Specification
The
10BASE-F specification enables users to take advantage of the distance and
transmission
characteristics available with the use of optical fiber. The standard
actually
contains three specifications:
10BASE-FP (passive). A passive-star topology for interconnecting stations
and repeaters with up to 1 km per segment.
10BASE-FL (link). Defines a point-to-point link that can be used to connect
stations or repeaters at up to 2 km.
10BASE-FB (backbone). Defines a point-to-point link that can be used to
connect repeaters at up to 2 km.
All
three of these specifications make use of a pair of optical fibers for each
transmission
link, one for transmission in each direction. In all cases, the signaling
scheme
involves the use of Manchester encoding. Each Manchester signal element
is
then converted to an optical signal element, with the presence of light
corresponding
to
high and the absence of light corresponding to low. Thus, a 10-Mbps
Manchester
bit stream actually requires 20 Mbps on the fiber.
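The Manchester-to-optical mapping can be sketched as follows; the particular transition convention (1 encoded as low-to-high) is assumed here for illustration:

```python
def manchester_encode(bits):
    """Map each bit to two optical signal elements (1 = light on,
    0 = light off) with a mid-bit transition. Convention assumed:
    1 -> (low, high), 0 -> (high, low)."""
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out
```

Because every bit becomes two signal elements, a 10-Mbps bit stream produces 20 million elements per second on the fiber, which is the 20-Mbps figure quoted above.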
The
10BASE-FP defines a passive star system that can support up to 33 stations
attached to a central passive star, of the type described in Lesson 3.
10BASE-FL and 10BASE-FB define point-to-point connections that can be used to
extend the length of a network; the key difference between the two is that
10BASE-FB makes use of synchronous retransmission. With synchronous signaling,
an
optical signal coming into a repeater is retimed with a local clock and
retransmitted.
With
conventional asynchronous signaling, used with 10BASE-FL, no such
retiming
takes place, so that any timing distortions are propagated through a series
of
repeaters. As a result, 10BASE-FB can be used to cascade up to 15 repeaters in
sequence
to achieve greater length.
IEEE
802.3 100-Mbps Specifications (Fast Ethernet)
Fast
Ethernet refers to a set of specifications developed by the IEEE 802.3
committee
to
provide a low-cost, Ethernet-compatible LAN operating at 100 Mbps. The
blanket
designation for these standards is 100BASE-T. The committee defined a
number
of alternatives to be used with different transmission media.
Figure
13.4 shows the terminology used in labeling the specifications and indicates
the
media used. All of the 100BASE-T options use the IEEE 802.3 MAC protocol
and
frame format. 100BASE-X refers to a set of options that use the physical
medium
specifications originally defined for Fiber Distributed Data Interface
(FDDI;
covered in the next lesson). All of the 100BASE-X schemes use two physical
links
between nodes: one for transmission and one for reception. 100BASE-TX
makes
use of shielded twisted pair (STP) or high-quality (Category 5) unshielded
twisted
pair (UTP). 100BASE-FX uses optical fiber.
In
many buildings, each of the 100BASE-X options requires the installation of
new
cable. For such cases, 100BASE-T4 defines a lower-cost alternative that can
use
Category 3, voice-grade UTP in addition to the higher-quality Category 5 UTP.
To
achieve the 100-Mbps data rate over lower-quality cable, 100BASE-T4 dictates
the
use of four twisted pair lines between nodes, with the data transmission making
use
of three pairs in one direction at a time.
100BASE-X
For
all of the 100BASE-T options, the topology is similar to that of 10BASE-T,
namely a star-wire topology.
Table
13.2 summarizes key characteristics of the 100BASE-T options.
For
all of the transmission media specified under 100BASE-X, a unidirectional
data
rate of 100 Mbps is achieved by transmitting over a single link (single twisted
pair,
single optical fiber). For all of these media, an efficient and effective
signal
encoding
scheme is required. The one chosen was originally defined for FDDI, and
can
be referred to as 4B/5B-NRZI. See Appendix 13A for a description.
The
100BASE-X designation includes two physical-medium specifications,
one
for twisted pair, known as 100BASE-TX, and one for optical fiber, known as
100BASE-FX.
100BASE-TX
makes use of two pairs of twisted pair cable, one pair used for
transmission
and one for reception. Both STP and Category 5 UTP are allowed.
The
MLT-3 signaling scheme is used (described in Appendix 13A).
100BASE-FX
makes use of two optical fiber cables, one for transmission and
one
for reception. With 100BASE-FX, a means is needed to convert the 4B/5B-NRZI
code group stream into optical signals. The technique used is known as
intensity
modulation. A binary 1 is represented by a burst or pulse of light; a binary
0
is represented by either the absence of a light pulse or by a light pulse at
very low
intensity.
100BASE-T4
100BASE-T4
is designed to produce a 100-Mbps data rate over lower-quality Category
3
cable, thus taking advantage of the large installed base of Category 3 cable
in
office buildings. The specification also indicates that the use of Category 5
cable
is
optional. 100BASE-T4 does not transmit a continuous signal between packets,
which
makes it useful in battery-powered applications.
For
100BASE-T4 using voice-grade Category 3 cable, it is not reasonable to
expect
to achieve 100 Mbps on a single twisted pair. Instead, 100BASE-T4 specifies
that
the data stream to be transmitted is split up into three separate data streams,
each
with an effective data rate of 33 Mbps. Four twisted pairs are used. Data are
transmitted
using three pairs and received using three pairs. Thus, two of the pairs
must
be configured for bidirectional transmission.
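The three-way split can be sketched with a simple round-robin demultiplexer. The real 100BASE-T4 scheme distributes 8B6T code groups rather than raw octets, so this only illustrates the 100/3 arithmetic:

```python
def split_t4(octets):
    """Round-robin demultiplex of the transmit stream onto three pairs.
    Illustrative only: 100BASE-T4 actually distributes 8B6T code groups."""
    streams = [[], [], []]
    for i, octet in enumerate(octets):
        streams[i % 3].append(octet)
    return streams

PER_PAIR_RATE = 100e6 / 3   # each pair carries an effective 33.3 Mbps
```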
As
with 100BASE-X, a simple NRZ encoding scheme is not used for
100BASE-T4;
this would require a signaling rate of 33 Mbps on each twisted pair
and
does not provide synchronization. Instead, a ternary signaling scheme known as
8B6T
is used.
TOKEN
RING/FDDI
Token
ring is the most commonly used MAC protocol for ring-topology LANs. In
this
lesson, we examine two standard LANs that use token ring: IEEE 802.5 and
FDDI.
IEEE
802.5 Medium Access Control
MAC
Protocol
The
token ring technique is based on the use of a small frame, called a token, that
circulates
when all stations are idle. A station wishing to transmit must wait until it
detects
a token passing by. It then seizes the token by changing one bit in the token,
which
transforms it from a token into a start-of-frame sequence for a data frame.
The
station then appends and transmits the remainder of the fields needed to construct
a
data frame.
When
a station seizes a
token and begins to transmit a data frame, there is no
token
on the ring, so other stations wishing to transmit must wait. The frame on the
ring
will make a round trip and be absorbed by the transmitting station. The
transmitting
station
will insert a new token on the ring when both of the following conditions
have
been met:
The
station has completed transmission of its frame.
The
leading edge of the transmitted frame has returned (after a complete
circulation
of
the ring) to the station.
If
the bit length of the ring is less than the frame length, the first condition
implies
the second; if not, a station could release a free token after it has finished
transmitting
but before it begins to receive its own transmission. The second condition
is
not strictly necessary, and is relaxed under certain circumstances. The
advantage
of
imposing the second condition is that it ensures that only one data frame at
a
time may be on the ring and that only one station at a time may be
transmitting,
thereby
simplifying error-recovery procedures.
Once
the new token has been inserted on the ring, the next station downstream
with
data to send will be able to seize the token and transmit. Figure 13.5
illustrates
the technique. In the example, A sends a packet to C, which receives it
and
then sends its own packets to A and D.
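The single-token discipline described above can be sketched as a small simulation; the station names and queue representation are invented for illustration:

```python
from collections import deque

def token_ring_round(stations, queued):
    """Round-robin sketch of single-token operation: the token visits
    stations in ring order; a station with queued frames seizes it,
    sends one frame, and releases a new token downstream.
    queued: station -> number of waiting frames."""
    pending = dict(queued)
    ring = deque(stations)
    order = []                       # transmission order observed
    while any(pending.values()):
        station = ring[0]
        if pending.get(station, 0):
            order.append(station)    # seize the token, transmit one frame
            pending[station] -= 1
        ring.rotate(-1)              # token passes to the next station
    return order
```

Running this with one station holding several frames shows the round-robin fairness discussed next: after each transmission, the released token gives every downstream station a chance before the sender transmits again.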
Note
that under lightly loaded conditions, there is some inefficiency with
token
ring because a station must wait for the token to come around before
transmitting.
However,
under heavy loads, which is when it matters, the ring functions in
a
round-robin fashion, which is both efficient and fair. To see this, consider
the configuration
in
Figure 13.5. After station A transmits, it releases a token. The first station
with
an opportunity to transmit is D. If D transmits, it then releases a token and
C
has the next opportunity, and so on.
The
principal advantage of token ring is the flexible control over access that it
provides.
In the simple scheme just described, the access is fair. As we shall see,
schemes
can be used to regulate access to provide for priority and for guaranteed
bandwidth
services.
The
principal disadvantage of token ring is the requirement for token maintenance.
Loss
of the token prevents further utilization of the ring. Duplication of the
token
can also disrupt ring operation. One station must be selected as a monitor to
ensure
that exactly one token is on the ring and to ensure that a free token is
reinserted,
if
necessary.
MAC
Frame
Figure
13.6 depicts the frame format for the 802.5 protocol. It consists of the
following
fields:
Starting
delimiter (SD). Indicates
start of frame. The SD consists of signaling
patterns
that are distinguishable from data. It is coded as follows: JK0JK000,
where
J and K are nondata symbols. The actual form of a nondata symbol
depends
on the signal encoding on the medium.
Access
control (AC). Has
the format PPPTMRRR, where PPP and RRR are
3-bit
priority and reservation variables, and M is the monitor bit; their use is
explained
below. T indicates whether this is a token or data frame. In the case
of
a token frame, the only remaining field is ED.
Frame
control (FC). Indicates
whether this is an LLC data frame. If not, bits
in
this field control operation of the token ring MAC protocol.
Destination
address (DA). As
with 802.3.
Source
address (SA). As
with 802.3.
Data
unit. Contains
LLC data unit.
Frame
check sequence (FCS). As with 802.3.
End
delimiter (ED). Contains
the error-detection bit (E), which is set if
any
repeater detects an error, and the intermediate bit (I), which is used to
indicate
that this is a frame other than the final one of a multiple-frame
transmission.
Frame
status (FS). Contains
the address recognized (A) and frame-copied
(C)
bits, whose use is explained below. Because the A and C bits are outside
the
scope of the FCS, they are duplicated to provide a redundancy check to
detect
erroneous settings.
We
can now restate the token ring algorithm for the case when a single priority
is
used. In this case, the priority and reservation bits are set to 0. A station
wishing
to
transmit waits until a token goes by, as indicated by a token bit of 0 in the AC
field.
The station seizes the token by setting the token bit to 1. The SD and AC
fields
of the received token now function as the first two fields of the outgoing
frame.
The station transmits one or more frames, continuing until either its supply
of
frames is exhausted or a token-holding timer expires. When the AC field of the
last
transmitted frame returns, the station sets the token bit to 0 and appends an
ED
field,
resulting in the insertion of a new token on the ring.
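The AC-octet manipulation in the restated algorithm (testing and setting the token bit within the PPPTMRRR format) can be sketched with bit operations:

```python
def parse_ac(ac):
    """Unpack the 802.5 access control octet, format PPPTMRRR."""
    return {
        "priority":    (ac >> 5) & 0b111,  # PPP: 3-bit priority
        "token_bit":   (ac >> 4) & 1,      # T: 0 = token, 1 = data frame
        "monitor":     (ac >> 3) & 1,      # M: monitor bit
        "reservation": ac & 0b111,         # RRR: 3-bit reservation
    }

def seize_token(ac):
    """A station seizes a token by setting the token bit to 1, turning
    the SD/AC fields into the start of a data frame."""
    return ac | 0b00010000
```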
Stations
in the receive mode listen to the ring. Each station can check passing
frames
for errors and can set the E bit to 1 if an error is
detected. If a station detects
its
own MAC address, it sets the A bit to 1; it may also copy
the frame, setting the
C
bit
to 1.
This
allows the originating station to differentiate three results of a frame
transmission:
Destination station nonexistent or not active (A = 0, C = 0)
Destination station exists but frame not copied (A = 1, C = 0)
Frame received (A = 1, C = 1)
Token
Ring Priority
The
802.5
standard
includes a specification for an optional priority mechanism.
Eight
levels of priority are supported by providing two 3-bit fields in each data
frame
and token: a priority field and a reservation field. To explain the algorithm,
let
us define the following variables:
Pf = priority of frame to be transmitted by station
Ps = service priority: priority of current token
Pr = value of Ps as contained in the last token received by this station
Rs = reservation value in current token
Rr = highest reservation value in the frames received by this station during
the last token rotation
The
scheme works as follows:
1. A station wishing to transmit must wait for a token with Ps ≤ Pf.
2. While waiting, a station may reserve a future token at its priority level (Pf).
If a data frame goes by, and if the reservation field is less than its priority
(Rs < Pf), then the station may set the reservation field of the frame to its
priority (Rs ← Pf). If a token frame goes by, and if (Rs < Pf AND Pf < Ps),
then the station sets the reservation field of the frame to its priority
(Rs ← Pf). This setting has the effect of preempting any lower-priority
reservation.
3.
When
a station seizes a token, it sets the token bit to 1 to start a data
frame,
sets
the reservation field of the data frame to 0, and leaves the
priority field
unchanged
(the same as that of the incoming token frame).
4. Following transmission of one or more data frames, a
station issues a new
token
with the priority and reservation fields set as indicated in Table 13.3.
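Rules 1 and 2 can be sketched as predicate functions over the variables defined for the algorithm (Ps, Pf, Rs); the function names are invented for illustration:

```python
def may_seize(P_s, P_f):
    """Rule 1: a station may seize a token only when P_s <= P_f
    (token priority no greater than the waiting frame's priority)."""
    return P_s <= P_f

def updated_reservation(frame_is_token, R_s, P_s, P_f):
    """Rule 2: the reservation field a waiting station leaves behind.
    Data frame: raise to P_f when R_s < P_f.
    Token frame: raise only when R_s < P_f and P_f < P_s."""
    if frame_is_token:
        return P_f if (R_s < P_f and P_f < P_s) else R_s
    return P_f if R_s < P_f else R_s
```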
The effect of the above steps is to sort the competing claims and to allow the waiting transmission of highest priority to seize the token as soon as possible. A moment's reflection reveals that, as stated, the algorithm has a ratchet effect on priority, driving it to the highest used level and keeping it there. To avoid this, a station that raises the priority (issues a token that has a higher priority than the token that it received) has the responsibility of later lowering the priority to its previous level. Therefore, a station that raises priority must remember both the old and the new priorities and must downgrade the priority of the token at the appropriate time. In essence, each station is responsible for assuring that no token circulates indefinitely because its priority is too high. By remembering the priority of earlier transmissions, a station can detect this condition and downgrade the priority to a previous, lower priority or reservation.
To implement the downgrading mechanism, two stacks are maintained by each station, one for reservations and one for priorities:

Sx = stack used to store new values of token priority
Sr = stack used to store old values of token priority

The reason that stacks rather than scalar variables are required is that the priority can be raised a number of times by one or more stations. The successive raises must be unwound in the reverse order.
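The two-stack unwinding can be sketched as follows. The stack names Sx and Sr follow the definitions above; the class interface is an assumption for illustration:

```python
class PriorityDowngrader:
    """Sketch of the two-stack downgrade bookkeeping.

    Sx holds the new token priorities this station has issued;
    Sr holds the old priorities they replaced, so that successive
    raises can be unwound in reverse order.
    """
    def __init__(self):
        self.Sx = []   # new values of token priority
        self.Sr = []   # old values of token priority

    def raise_priority(self, old, new):
        # Remember both values before issuing the higher-priority token.
        self.Sr.append(old)
        self.Sx.append(new)
        return new                     # priority of the token we issue

    def maybe_downgrade(self, token_priority):
        # If the passing token carries a priority we raised it to,
        # restore the priority that was in effect before our raise.
        if self.Sx and token_priority == self.Sx[-1]:
            self.Sx.pop()
            return self.Sr.pop()
        return token_priority
```

Raising 0 to 3 and then 3 to 5 is undone in reverse: a later token seen at priority 5 is downgraded to 3, and one seen at 3 back to 0, which is why scalars would not suffice.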
To summarize, a station having a higher-priority frame to transmit than the current frame can reserve the next token for its priority level as the frame passes by. When the next token is issued, it will be at the reserved priority level. Stations of lower priority cannot seize the token, so it passes to the reserving station or to an intermediate station with data to send of equal or higher priority than the reserved priority level. The station that upgraded the priority level is responsible for downgrading it to its former level when all higher-priority stations are finished. When that station sees a token at the higher priority after it has transmitted, it can assume that there is no more higher-priority traffic waiting, and it downgrades the token before passing it on.
Figure 13.7 is an example. The following events occur:

1. A is transmitting a data frame to B at priority 0. When the frame has completed a circuit of the ring and returns to A, A will issue a token frame. However, as the data frame passes D, D makes a reservation at priority 3 by setting the reservation field to 3.
2. A issues a token with the priority field set to 3.
3. If neither B nor C has data of priority 3 or greater to send, they cannot seize the token. The token circulates to D, which seizes the token and issues a data frame.
4. After D's data frame returns to D, D issues a new token at the same priority as the token that it received: priority 3.
5. A sees a token at the priority level that it used to last issue a token; it therefore seizes the token even if it has no data to send.
6. A issues a token at the previous priority level: priority 0.

Note that, after A has issued a priority 3 token, any station with data of priority 3 or greater may seize the token. Suppose that at this point station C now has priority 4 data to send. C will seize the token, transmit its data frame, and reissue a priority 3 token, which is then seized by D. By the time that a priority 3 token arrives at A, all intervening stations with data of priority 3 or greater to send will have had the opportunity. It is now appropriate, therefore, for A to downgrade the token.
Early Token Release
When a station issues a frame, if the bit length of the ring is less than that of the frame, the leading edge of the transmitted frame will return to the transmitting station before it has completed transmission; in this case, the station may issue a token as soon as it has finished frame transmission. If the frame is shorter than the bit length of the ring, then after a station has completed transmission of a frame, it must wait until the leading edge of the frame returns before issuing a token. In this latter case, some of the potential capacity of the ring is unused.
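The trade-off just described reduces to a small calculation. A sketch, assuming both quantities are measured in bit times:

```python
def token_issue_delay(frame_bits, ring_latency_bits):
    """Extra bit times a station (without early token release) must
    wait after finishing transmission before it may issue a token.

    ring_latency_bits is the "bit length" of the ring: the number of
    bits in flight around the loop at any instant.
    """
    if frame_bits >= ring_latency_bits:
        # The leading edge returns before transmission ends: no wait.
        return 0
    # Otherwise, wait for the leading edge of the frame to return.
    return ring_latency_bits - frame_bits
```

A 1000-bit frame on a 400-bit ring incurs no wait, while a 400-bit frame on a 1000-bit ring idles the station for 600 bit times, which is exactly the capacity that early token release recovers.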
To allow for more efficient ring utilization, an early token release (ETR) option has been added to the 802.5 standard. ETR allows a transmitting station to release a token as soon as it completes frame transmission, whether or not the frame header has returned to the station. The priority used for a token released prior to receipt of the previous frame header is the priority of the most recently received frame.
One effect of ETR is that access delay for priority traffic may increase when the ring is heavily loaded with short frames. Because a station must issue a token before it can read the reservation bits of the frame it just transmitted, the station will not respond to reservations. Thus, the priority mechanism is at least partially disabled. Stations that implement ETR are compatible and interoperable with those that do not.
IEEE 802.5 Physical Layer Specification
The 802.5 standard (Table 13.4) specifies the use of shielded twisted pair with data rates of 4 and 16 Mbps using Differential Manchester encoding. An earlier specification of a 1-Mbps system has been dropped from the most recent edition of the standard. A recent addition to the standard is the use of unshielded twisted pair at 4 Mbps.
FDDI Medium Access Control
FDDI is a token ring scheme, similar to the IEEE 802.5 specification, that is designed for both LAN and MAN applications. There are several differences that are designed to accommodate the higher data rate (100 Mbps) of FDDI.
MAC Frame
Figure 13.8 depicts the frame format for the FDDI protocol. The standard defines the contents of this format in terms of symbols, with each data symbol corresponding to 4 data bits. Symbols are used because, at the physical layer, data are encoded in 4-bit chunks. However, MAC entities, in fact, must deal with individual bits, so the discussion that follows sometimes refers to 4-bit symbols and sometimes to bits.
A frame other than a token frame consists of the following fields:

Preamble. Synchronizes the frame with each station's clock. The originator of the frame uses a field of 16 idle symbols (64 bits); subsequent repeating stations may change the length of the field, as consistent with clocking requirements. The idle symbol is a nondata fill pattern; the actual form of a nondata symbol depends on the signal encoding on the medium.

Starting delimiter (SD). Indicates the start of frame. It is coded as JK, where J and K are nondata symbols.

Frame control (FC). Has the bit format CLFFZZZZ, where C indicates whether this is a synchronous or asynchronous frame (explained below); L indicates the use of 16- or 48-bit addresses; FF indicates whether this is an LLC, MAC control, or reserved frame. For a control frame, the remaining 4 bits indicate the type of control frame.

Destination address (DA). Specifies the station(s) for which the frame is intended. It may be a unique physical address, a multicast-group address, or a broadcast address. The ring may contain a mixture of 16- and 48-bit address lengths.

Source address (SA). Specifies the station that sent the frame.

Information. Contains an LLC data unit or information related to a control operation.

Frame check sequence (FCS). A 32-bit cyclic redundancy check, based on the FC, DA, SA, and information fields.

Ending delimiter (ED). Contains a nondata symbol (T) and marks the end of the frame, except for the FS field.

Frame status (FS). Contains the error-detected (E), address-recognized (A), and frame-copied (C) indicators. Each indicator is represented by a symbol, which is R for "reset" or "false" and S for "set" or "true."
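As an illustration of the CLFFZZZZ layout of the FC field, the following sketch unpacks an FC byte, assuming C occupies the most significant bit. The exact code points assigned to the FF and ZZZZ fields are defined by the standard and are not reproduced here:

```python
def decode_fc(fc_byte):
    """Unpack the FDDI frame-control byte CLFFZZZZ.

    C: class bit (synchronous vs. asynchronous frame)
    L: address-length bit (48-bit vs. 16-bit addresses)
    FF: frame type (LLC, MAC control, or reserved)
    ZZZZ: control-frame subtype, meaningful for control frames
    """
    return {
        "synchronous": bool(fc_byte >> 7 & 0b1),    # C bit
        "long_addresses": bool(fc_byte >> 6 & 0b1), # L bit
        "frame_type": fc_byte >> 4 & 0b11,          # FF bits
        "control_type": fc_byte & 0b1111,           # ZZZZ bits
    }
```

For example, 0b11010110 decodes as a synchronous frame with 48-bit addresses, frame type 0b01, and subtype 0b0110.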
A token frame consists of the following fields:

Preamble. As above.

Starting delimiter (SD). As above.

Frame control (FC). Has the bit format 10000000 or 11000000 to indicate that this is a token.

Ending delimiter (ED). Contains a pair of nondata symbols (T) that terminate the token frame.
A comparison with the 802.5 frame (Figure 13.6) shows that the two are quite similar. The FDDI frame includes a preamble to aid in clocking, which is more demanding at the higher data rate. Both 16- and 48-bit addresses are allowed in the same network with FDDI; this is more flexible than the scheme used on all the 802 standards. Finally, there are some differences in the control bits. For example, FDDI does not include priority and reservation bits; capacity allocation is handled in a different way, as described below.
MAC Protocol
The basic (without capacity allocation) FDDI MAC protocol is fundamentally the same as IEEE 802.5. There are two key differences:

1. In FDDI, a station waiting for a token seizes the token by aborting (failing to repeat) the token transmission as soon as the token frame is recognized. After the captured token is completely received, the station begins transmitting one or more data frames. The 802.5 technique of flipping a bit to convert a token to the start of a data frame was considered impractical because of the high data rate of FDDI.
2. In FDDI, a station that has been transmitting data frames releases a new token as soon as it completes data frame transmission, even if it has not begun to receive its own transmission. This is the same technique as the early token release option of 802.5. Again, because of the high data rate, it would be too inefficient to require the station to wait for its frame to return, as in normal 802.5 operation.
Figure 13.9 gives an example of ring operation. After station A has seized the token, it transmits frame F1 and immediately transmits a new token. F1 is addressed to station C, which copies it as it circulates past. The frame eventually returns to A, which absorbs it. Meanwhile, B seizes the token issued by A and transmits F2 followed by a token. This action could be repeated any number of times, so that, at any one time, there may be multiple frames circulating the ring. Each station is responsible for absorbing its own frames based on the source address field.
A further word should be said about the frame status (FS) field. Each station can check passing bits for errors and can set the E indicator if an error is detected. If a station detects its own address, it sets the A indicator; it may also copy the frame, setting the C indicator. This allows the originating station, when it absorbs a frame that it previously transmitted, to differentiate among three conditions:

Station nonexistent/nonactive
Station active but frame not copied
Frame copied

When a frame is absorbed, the status indicators (E, A, C) in the FS field may be examined to determine the result of the transmission. However, if an error or failure-to-receive condition is discovered, the MAC protocol entity does not attempt to retransmit the frame, but reports the condition to LLC. It is the responsibility of LLC or some higher-layer protocol to take corrective action.
Capacity Allocation
The priority scheme used in 802.5 will not work in FDDI, as a station will often issue a token before its own transmitted frame returns; hence, the use of a reservation field is not effective. Furthermore, the FDDI standard is intended to provide for greater control over the capacity of the network than 802.5, to meet the requirements for a high-speed LAN. Specifically, the FDDI capacity-allocation scheme seeks to accommodate a mixture of stream and bursty traffic.

To accommodate this requirement, FDDI defines two types of traffic: synchronous and asynchronous. Each station is allocated a portion of the total capacity (the portion may be zero); the frames that it transmits during this time are referred to as synchronous frames. Any capacity that is not allocated, or that is allocated but not used, is available for the transmission of additional frames, referred to as asynchronous frames.
The scheme works as follows. A target token-rotation time (TTRT) is defined; each station stores the same value for TTRT. Some or all stations may be provided a synchronous allocation (SAi), which may vary among stations. The allocations must be such that the sum of all the SAi values, plus the ring latency and other overhead, does not exceed TTRT.

The assignment of values for SAi is by means of a station management protocol involving the exchange of station management frames. The protocol assures that the above constraint is satisfied. Initially, each station has a zero allocation, and it must request a change in the allocation. Support for synchronous allocation is optional; a station that does not support synchronous allocation may only transmit asynchronous traffic.
All stations have the same value of TTRT and a separately assigned value of SAi. In addition, several variables that are required for the operation of the capacity-allocation algorithm are maintained at each station:

* Token-rotation timer (TRT)
* Token-holding timer (THT)
* Late counter (LC)

Each station is initialized with TRT set equal to TTRT and LC set to zero. When the timer is enabled, TRT begins to count down. If a token is received before TRT expires, TRT is reset to TTRT. If TRT counts down to 0 before a token is received, then LC is incremented to 1 and TRT is reset to TTRT and again begins to count down. If TRT expires a second time before receiving a token, LC is incremented to 2, the token is considered lost, and a Claim process (described below) is initiated. Thus, LC records the number of times, if any, that TRT has expired since the token was last received at that station. The token is considered to have arrived early if TRT has not expired since the station received the token, that is, if LC = 0.
When a station receives the token, its actions will depend on whether the token is early or late. If the token is early, the station saves the remaining time from TRT in THT, resets TRT, and enables TRT:

THT ← TRT
TRT ← TTRT
enable TRT

The station can then transmit according to the following rules:

1. It may transmit synchronous frames for a time SAi.
2. After transmitting synchronous frames, or if there were no synchronous frames to transmit, THT is enabled. The station may begin transmission of asynchronous frames as long as THT > 0.
If a station receives a token and the token is late, then LC is set to zero and TRT continues to run. The station can then transmit synchronous frames for a time SAi. The station may not transmit any asynchronous frames.
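The timer rules above can be sketched as a small state machine. This is an illustrative model, not the standard's formal specification; time is treated in abstract units, and the class interface is an assumption:

```python
class FddiStation:
    """Sketch of the FDDI TRT/THT/LC timer rules.

    TRT counts down between token arrivals; LC counts expirations.
    On an early token, the residual TRT becomes THT, bounding
    asynchronous transmission; a late token permits only the
    synchronous allocation SAi.
    """
    def __init__(self, ttrt, sa_i):
        self.TTRT, self.SAi = ttrt, sa_i
        self.TRT, self.LC = ttrt, 0

    def elapse(self, t):
        # Time passing while the station waits for the token.
        self.TRT -= t
        while self.TRT <= 0:
            self.LC += 1
            self.TRT += self.TTRT
            if self.LC == 2:
                raise RuntimeError("token lost: initiate Claim process")

    def token_arrives(self):
        """Return (sync_time, async_time) the station may use."""
        if self.LC == 0:                  # token is early
            tht = self.TRT                # THT <- TRT
            self.TRT = self.TTRT          # TRT <- TTRT, re-enabled
            return self.SAi, max(tht, 0)
        self.LC = 0                       # token is late
        return self.SAi, 0                # no asynchronous frames
```

With TTRT = 100 and SAi = 20, a token arriving after only 4 time units leaves TRT = 96, so the station may send its synchronous allocation plus 96 units of asynchronous traffic, matching the Figure 13.10 example discussed below.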
This scheme is designed to assure that the time between successive sightings of a token is on the order of TTRT or less. Of this time, a given amount is always available for synchronous traffic, and any excess capacity is available for asynchronous traffic. Because of random fluctuations in traffic, the actual token-circulation time may exceed TTRT, as demonstrated below.
Figure 13.10 provides a simplified example of a 4-station ring. The following assumptions are made:

1. Traffic consists of fixed-length frames.
2. TTRT = 100 frame times.
3. SAi = 20 frame times for each station.
4. Each station is always prepared to send its full synchronous allocation and as many asynchronous frames as possible.
5. The total overhead during one complete token circulation is 4 frame times (one frame time per station).
One row of the table corresponds to one circulation of the token. For each station, the token arrival time is shown, followed by the value of TRT at the time of arrival, followed by the number of synchronous and asynchronous frames transmitted while the station holds the token.

The example begins after a period during which no data frames have been sent, so that the token has been circulating as rapidly as possible (4 frame times). Thus, when Station 1 receives the token at time 4, it measures a circulation time of 4 (its TRT = 96). It is therefore able to send not only its 20 synchronous frames but also 96 asynchronous frames; recall that THT is not enabled until after the station has sent its synchronous frames. Station 2 experiences a circulation time of 120 (20 frames + 96 frames + 4 overhead frames), but is nevertheless entitled to transmit its 20 synchronous frames. Note that if each station continues to transmit its maximum allowable synchronous frames, then the circulation time surges to 180 (at time 184), but soon stabilizes at approximately 100. With a total synchronous utilization of 80 and an overhead of 4 frame times, there is an average capacity of 16 frame times available for asynchronous transmission. Note that if all stations always have a full backlog of asynchronous traffic, the opportunity to transmit asynchronous frames is distributed among them.
FDDI Physical Layer Specification
The FDDI standard specifies a ring topology operating at 100 Mbps (Table 13.5). The optical fiber medium uses 4B/5B-NRZI encoding. Two twisted-pair media are specified: 100-ohm Category 5 unshielded twisted pair and 150-ohm shielded twisted pair. For both twisted-pair media, MLT-3 encoding is used. See Appendix 13A for a discussion of these encoding schemes.
100VG-ANYLAN
Like 100BASE-T, 100VG-AnyLAN is intended to be a 100-Mbps extension to the 10-Mbps Ethernet and to support IEEE 802.3 frame types. It also provides compatibility with IEEE 802.5 token ring frames. 100VG-AnyLAN uses a new MAC scheme known as demand priority to determine the order in which nodes share the network. Because this specification does not use CSMA/CD, it has been standardized under a new working group, IEEE 802.12, rather than allowed to remain in the 802.3 working group.
Topology
The topology for a 100VG-AnyLAN network is a hierarchical star. The simplest configuration consists of a single central hub and a number of attached devices. More complex arrangements are possible, in which there is a single root hub with one or more subordinate level-2 hubs; a level-2 hub can have additional subordinate hubs at level 3, and so on to an arbitrary depth.
Medium Access Control
The MAC algorithm for 802.12 is a round-robin scheme with two priority levels. We first describe the algorithm for a single-hub network and then discuss the general case.
Single-Hub Network
When a station wishes to transmit a frame, it first issues a request to the central hub and then awaits permission from the hub to transmit. A station must designate each request as normal-priority or high-priority.

The central hub continually scans all of its ports for a request in round-robin fashion. Thus, an n-port hub looks for a request first on port 1, then on port 2, and so on up to port n. The scanning process then begins again at port 1. The hub maintains two pointers: a high-priority pointer and a normal-priority pointer. During one complete cycle, the hub grants each high-priority request in the order in which the requests are encountered. If at any time there are no pending high-priority requests, the hub will grant any normal-priority requests that it encounters.
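The two-pointer scanning discipline can be sketched as a single grant decision. The function and its interface are hypothetical; only the pointer for the priority class actually served advances:

```python
def grant_next(n_ports, pending_high, pending_normal, hi_ptr, norm_ptr):
    """One grant decision of a demand-priority hub.

    pending_high / pending_normal: sets of port numbers (1..n_ports)
    with outstanding requests at each priority level.
    Returns (granted_port, hi_ptr, norm_ptr); granted_port is None
    if no requests are pending.
    """
    # High-priority requests always take precedence.
    if pending_high:
        for i in range(n_ports):
            port = (hi_ptr - 1 + i) % n_ports + 1
            if port in pending_high:
                return port, port % n_ports + 1, norm_ptr
    # Otherwise serve normal-priority requests in round-robin order.
    if pending_normal:
        for i in range(n_ports):
            port = (norm_ptr - 1 + i) % n_ports + 1
            if port in pending_normal:
                return port, hi_ptr, port % n_ports + 1
    return None, hi_ptr, norm_ptr
```

With both pointers at port 1 and a normal-priority request pending on port 2, the hub grants port 2 and advances the normal-priority pointer to port 3, as in step 1 of the Figure 13.11 example below.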
Figure 13.11 gives an example. The sequence of events is as follows:

1. The hub sets both pointers to port 1 and begins scanning. The first request encountered is a normal-priority request from port 2. The hub grants this request and updates the normal-priority pointer to port 3.
2. Port 2 transmits a normal-priority frame. The hub receives this frame and retransmits it. During this period, two high-priority requests are generated.
3. Once the frame from port 2 is transmitted, the hub begins granting high-priority requests in round-robin order, beginning with port 1 and followed by port 5. The high-priority pointer is set to port 6.
4. After the high-priority frame from port 5 completes, there are no outstanding high-priority requests and the hub turns to the normal-priority requests. Four requests have arrived since the last normal-priority frame was transmitted: from ports 2, 7, 3, and 6. Because the normal-priority pointer is set to port 3, these requests will be granted in the order 3, 6, 7, and 2 if no other requests intervene.
5. The frames from ports 3, 6, and 7 are transmitted in turn. During the transmission of frame 7, a high-priority request arrives from port 1 and a normal-priority request arrives from port 8. The hub sets the normal-priority pointer to port 8.
6. Because high-priority requests take precedence, port 1 is granted access next.
7. After the frame from port 1 is transmitted, the hub has two outstanding normal-priority requests. The request from port 2 has been waiting the longest; however, port 8 is next in round-robin order to be satisfied, and so its request is granted, followed by that of port 2.
Hierarchical Network
In a hierarchical network, all of the end-system ports on all hubs are treated as a single set of ports for purposes of the round-robin algorithm. The hubs are configured to cooperate in scanning the ports in the proper order. Put another way, the set of hubs is treated logically as a single hub.
Figure 13.12 indicates port ordering in a hierarchical network. The order is generated by traversing a tree representation of the network, in which the branches under each node in the tree are arranged in increasing order from left to right. With this convention, the port order is generated by traversing the tree in what is referred to as preorder traversal, which is defined recursively as follows:

1. Visit the root.
2. Traverse the subtrees from left to right.

This method of traversal is also known as a depth-first search of the tree.
Let us now consider the mechanics of medium access and frame transmission in a hierarchical network. There are a number of contingencies to consider. First, consider the behavior of the root hub. This hub performs the high-priority and normal-priority round-robin algorithms for all directly attached devices. Thus, if there are one or more pending high-priority requests, the hub grants these requests in round-robin fashion. If there are no pending high-priority requests, the hub grants any normal-priority requests in round-robin fashion. When a request is granted by the root hub to a directly attached end system, that system may immediately transmit a frame. When a request is granted by the root hub to a directly attached level-2 hub, then control passes to the level-2 hub, which then proceeds to execute its own round-robin algorithms.
Any end system that is ready to transmit sends a request signal to the hub to which it attaches. If the end system is attached directly to the root hub, then the request is conveyed directly to the root hub. If the end system is attached to a lower-level hub, then the request is transmitted directly to that hub. If that hub does not currently have control of the round-robin algorithm, then it passes the request up to the next higher-level hub. Eventually, all requests that are not granted at a lower level are passed up to the root hub.
The scheme described so far does enforce a round-robin discipline among all attached stations, but two refinements are needed. First, a preemption mechanism is needed. This is best explained by an example. Consider the following sequence of events:

1. Suppose that the root hub (R) in Figure 13.12 is in control and that there are no high-priority requests pending anywhere in the network. However, stations 5-1, 5-2, and 5-3 have all issued normal-priority requests, causing hub B to issue a normal-priority request to R.
2. R will eventually grant this request, passing control to B.
3. B then proceeds to honor its outstanding requests one at a time.
4. While B is honoring its first normal-priority request, station 1-6 issues a high-priority request.
5. In response to the request from 1-6, R issues a preempt signal to B; this tells B to relinquish control after the completion of the current transmission.
6. R grants the request of 1-6 and then continues its round-robin algorithm.
The second refinement is a mechanism to prevent a nonroot hub from retaining control indefinitely. To see the problem, suppose that B in Figure 13.12 has a high-priority request pending from 5-1. After receiving control from R, B grants the request to 5-1. Meanwhile, other stations subordinate to B issue high-priority requests. B could continue in round-robin fashion to honor all of these requests. If additional requests arrive from other subordinates of B during these other transmissions, then B would be able to continue granting requests indefinitely, even though there are other high-priority requests pending elsewhere in the network. To prevent this kind of lockup, a subordinate hub may only retain control for a single round-robin cycle through all of its ports.
The IEEE 802.12 MAC algorithm is quite effective. When multiple stations offer high loads, the protocol behaves much like a token ring protocol, with network access rotating among all high-priority requesters, followed by low-priority requesters when there are no outstanding high-priority requests. At low load, the protocol behaves in a similar fashion to CSMA/CD under low load: a single requester gains medium access almost immediately.
100VG-AnyLAN Physical Layer Specification
The current version of IEEE 802.12 calls for the use of 4-pair unshielded twisted pair (UTP) using Category 3, 4, or 5 cable. Future versions will also support 2-pair Category 5 UTP, shielded twisted pair, and fiber optic cabling. In all cases, the data rate is 100 Mbps.
Signal Encoding
A key objective of the 100VG-AnyLAN effort is to be able to achieve 100 Mbps over short distances using ordinary voice-grade (Category 3) cabling. The advantage of this is that in many existing buildings, there is an abundance of voice-grade cabling and very little else. Thus, if this cabling can be used, installation costs are minimized.

With present technology, a data rate of 100 Mbps over one or two Category 3 pairs is impractical. To meet the objective, 100VG-AnyLAN specifies a novel encoding scheme that involves using four pairs to transmit data in a half-duplex mode. Thus, to achieve a data rate of 100 Mbps, a data rate of only 25 Mbps is needed on each channel. An encoding scheme known as 5B6B is used. (See Appendix 13A for a description.)
Data from the MAC layer can be viewed as a stream of bits. The bits from this stream are taken five at a time to form a stream of quintets that are then passed down to the four transmission channels in round-robin fashion. Next, each quintet passes through a simple scrambling algorithm to increase the number of transitions between 0 and 1 and to improve the signal spectrum. At this point, it might be possible to simply transmit the data using NRZ. However, even with the scrambling, the further step of 5B6B encoding is used to ensure synchronization and also to maintain dc balance.

Because the MAC frame is being divided among four channels, the beginning and ending of a MAC frame must be delimited on each of the channels, which is the purpose of the delimiter generators. Finally, NRZ transmission is used on each channel.
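The first step of this pipeline, dealing quintets to the four channels, can be sketched as follows. The scrambling and 5B6B encoding that would follow on each channel are omitted, and the function name is an assumption:

```python
def quintets_round_robin(bits, n_channels=4):
    """Split a MAC bit stream into 5-bit quintets and deal them out
    to the transmission channels in round-robin fashion.

    bits: a string of '0'/'1' characters from the MAC layer.
    Returns a list of per-channel quintet lists.
    """
    # Take the bits five at a time to form quintets.
    quintets = [bits[i:i + 5] for i in range(0, len(bits), 5)]
    channels = [[] for _ in range(n_channels)]
    # Deal quintets to channels 0..n-1, 0..n-1, ... in turn.
    for i, q in enumerate(quintets):
        channels[i % n_channels].append(q)
    return channels
```

Since each channel carries one quintet in four, each need only run at a quarter of the aggregate rate, which is how 100 Mbps is achieved with 25 Mbps per pair.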
ATM LAN
A document on customer premises networks jointly prepared by Apple, Bellcore, Sun, and Xerox [ABSX92] identifies three generations of LANs:

First Generation. Typified by the CSMA/CD and token ring LANs. The first generation provided terminal-to-host connectivity and supported client/server architectures at moderate data rates.

Second Generation. Typified by FDDI. The second generation responds to the need for backbone LANs and for support of high-performance workstations.

Third Generation. Typified by ATM LANs. The third generation is designed to provide the aggregate throughputs and real-time transport guarantees that are needed for multimedia applications.
Typical requirements for a third-generation LAN include the following:

1. Support multiple, guaranteed classes of service. A live video application, for example, may require a guaranteed 2-Mbps connection for acceptable performance, while a file transfer program can utilize a background class of service.
2. Provide scalable throughput that is capable of growing both in per-host capacity (to enable applications that require large volumes of data in and out of a single host) and in aggregate capacity (to enable installations to grow from a few to several hundred high-performance hosts).
3. Facilitate the interworking between LAN and WAN technology.
ATM is ideally suited to these requirements. Using virtual paths and virtual channels, multiple classes of service are easily accommodated, either in a preconfigured fashion (permanent connections) or on demand (switched connections). ATM is easily scalable by adding more ATM switching nodes and using higher (or lower) data rates for attached devices. Finally, with the increasing acceptance of cell-based transport for wide-area networking, the use of ATM for a premises network enables seamless integration of LANs and WANs.
The term ATM LAN has been used by vendors and researchers to apply to a variety of configurations. At the very least, an ATM LAN implies the use of ATM as a data transport protocol somewhere within the local premises. Among the possible types of ATM LANs:

Gateway to ATM WAN. An ATM switch acts as a router and traffic concentrator for linking a premises network complex to an ATM WAN.

Backbone ATM switch. Either a single ATM switch or a local network of ATM switches interconnects other LANs.

Workgroup ATM. High-performance multimedia workstations and other end systems connect directly to an ATM switch.

These are all "pure" configurations. In practice, a mixture of two or all three of these types of networks is used to create an ATM LAN.
Figure 13.13 shows an example of a backbone ATM LAN that includes links to the outside world. In this example, the local ATM network consists of four switches interconnected with high-speed, point-to-point links running at the standardized ATM rates of 155 and 622 Mbps. On the premises, there are three other LANs, each of which has a direct connection to one of the ATM switches. The data rate from an ATM switch to an attached LAN conforms to the native data rate of that LAN. For example, the connection to the FDDI network is at 100 Mbps. Thus, the switch must include some buffering and speed-conversion capability to map the data rate from the attached LAN to an ATM data rate. The ATM switch must also perform some sort of protocol conversion from the MAC protocol used on the attached LAN to the ATM cell stream used on the ATM network. A simple approach is for each ATM switch that attaches to a LAN to function as a bridge or router.
An ATM LAN configuration such as that shown in Figure 13.13 provides a relatively painless method for inserting a high-speed backbone into a local environment. As the on-site demand rises, it is a simple matter to increase the capacity of the backbone by adding more switches, increasing the throughput of each switch, and increasing the data rate of the trunks between switches. With this strategy, the load on individual LANs within the premises can be increased, and the number of LANs can grow.
However, this simple backbone ATM LAN does not address all of the needs for local communications. In particular, in the simple backbone configuration, the end systems (workstations, servers, etc.) remain attached to shared-media LANs, with the limitations on data rate imposed by the shared medium.
A more advanced, and more powerful, approach is to use ATM technology in a hub. Figure 13.14 suggests the capabilities that can be provided with this approach. Each ATM hub includes a number of ports that operate at different data rates and that use different protocols. Typically, such a hub consists of a number of rack-mounted modules, with each module containing ports of a given data rate and protocol.
The key difference between the ATM hub shown in Figure 13.14 and the ATM nodes depicted in Figure 13.13 is the way in which individual end systems are handled. Notice that in the ATM hub, each end system has a dedicated point-to-point link to the hub. Each end system includes the communications hardware and software to interface to a particular type of LAN, but in each case, the LAN contains only two devices: the end system and the hub! For example, each device attached to a 10-Mbps Ethernet port operates using the CSMA/CD protocol at 10 Mbps. However, because each end system has its own dedicated line, the effect is that each system has its own dedicated 10-Mbps Ethernet. Therefore, each end system can operate at close to the maximum 10-Mbps data rate.
The use of a configuration such as that of either Figure 13.13 or 13.14 has the advantage that existing LAN installations and LAN hardware (so-called legacy LANs) can continue to be used while ATM technology is introduced. The disadvantage is that the use of such a mixed-protocol environment requires the implementation of some sort of protocol conversion capability. A simpler approach, but one that requires that end systems be equipped with ATM capability, is to implement a "pure" ATM LAN.
One issue that was not addressed in our discussion so far has to do with the interoperability of end systems on a variety of interconnected LANs. End systems attached directly to one of the legacy LANs implement the MAC layer appropriate to that type of LAN. End systems attached directly to an ATM network implement the ATM and AAL protocols. As a result, there are three areas of compatibility to consider:

1. Interaction between an end system on an ATM network and an end system on a legacy LAN.
2. Interaction between an end system on a legacy LAN and an end system on another legacy LAN of the same type (e.g., two IEEE 802.3 networks).
3. Interaction between an end system on a legacy LAN and an end system on another legacy LAN of a different type (e.g., an IEEE 802.3 network and an IEEE 802.5 network).