4.2 Multiple Access Protocols
Many algorithms for allocating a
multiple access channel are known. In the following sections we will study a
small sample of the more interesting ones and give some examples of their use.
In the 1970s, Norman Abramson and
his colleagues at the University of Hawaii devised a new and elegant method to
solve the channel allocation problem. Their work has been extended by many
researchers since then (Abramson, 1985). Although Abramson's work, called the
ALOHA system, used ground-based radio broadcasting, the basic idea is
applicable to any system in which uncoordinated users are competing for the use
of a single shared channel.
We will discuss two versions of
ALOHA here: pure and slotted. They differ with respect to whether time is
divided into discrete slots into which all frames must fit. Pure ALOHA does not
require global time synchronization; slotted ALOHA does.
The basic idea of an ALOHA system is
simple: let users transmit whenever they have data to be sent. There will be
collisions, of course, and the colliding frames will be damaged. However, due
to the feedback property of broadcasting, a sender can always find out whether
its frame was destroyed by listening to the channel, the same way other users
do. With a LAN, the feedback is immediate; with a satellite, there is a delay
of 270 msec before the sender knows if the transmission was successful. If
listening while transmitting is not possible for some reason, acknowledgements
are needed. If the frame was destroyed, the sender just waits a random amount
of time and sends it again. The waiting time must be random or the same frames
will collide over and over, in lockstep. Systems in which multiple users share
a common channel in a way that can lead to conflicts are widely known as contention
systems.
A sketch of frame generation in an
ALOHA system is given in Fig. 4-1. We have made the frames all the same
length because the throughput of ALOHA systems is maximized by having a uniform
frame size rather than by allowing variable length frames.
Whenever two frames try to occupy
the channel at the same time, there will be a collision and both will be
garbled. If the first bit of a new frame overlaps with just the last bit of a
frame almost finished, both frames will be totally destroyed and both will have
to be retransmitted later. The checksum cannot (and should not) distinguish
between a total loss and a near miss. Bad is bad.
An interesting question is: What is
the efficiency of an ALOHA channel? In other words, what fraction of all
transmitted frames escape collisions under these chaotic circumstances? Let us
first consider an infinite collection of interactive users sitting at their
computers (stations). A user is always in one of two states: typing or waiting.
Initially, all users are in the typing state. When a line is finished, the user
stops typing, waiting for a response. The station then transmits a frame
containing the line and checks the channel to see if it was successful. If so,
the user sees the reply and goes back to typing. If not, the user continues to
wait and the frame is retransmitted over and over until it has been
successfully sent.
Let the ''frame time'' denote the
amount of time needed to transmit the standard, fixed-length frame (i.e., the
frame length divided by the bit rate). At this point we assume that the
infinite population of users generates new frames according to a Poisson
distribution with mean N frames per frame time. (The infinite-population
assumption is needed to ensure that N does not decrease as users become
blocked.) If N > 1, the user community is generating frames at a higher rate
than the channel can handle, and nearly every frame will suffer a collision.
For reasonable throughput we would expect 0 < N < 1.
In addition to the new frames, the
stations also generate retransmissions of frames that previously suffered
collisions. Let us further assume that the probability of k transmission
attempts per frame time, old and new combined, is also Poisson, with mean G per
frame time. Clearly, G ≥ N. At low load (i.e., N ≈ 0), there will be few collisions, hence few
retransmissions, so G ≈ N. At high load there will be many
collisions, so G > N. Under all loads, the throughput, S, is just the
offered load, G, times the probability, P0, of a transmission
succeeding—that is, S = GP0, where P0 is the probability
that a frame does not suffer a collision.
A frame will not suffer a collision
if no other frames are sent within one frame time of its start, as shown in Fig. 4-2. Under what conditions will the shaded
frame arrive undamaged? Let t be the time required to send a frame. If any
other user has generated a frame between time t0 and t0 +
t, the end of that frame will collide with the beginning of the shaded one. In
fact, the shaded frame's fate was already sealed even before the first bit was
sent, but since in pure ALOHA a station does not listen to the channel before
transmitting, it has no way of knowing that another frame was already underway.
Similarly, any other frame started between t0 + t and t0
+ 2t will bump into the end of the shaded frame.
The probability that k frames are generated during a given frame time is given by the Poisson distribution:

Pr[k] = (G^k e^{-G}) / k!        (4-2)

so the probability of zero frames is just e^{-G}. In an interval two frame times long, the mean number of frames generated is 2G. The probability of no other traffic being initiated during the entire vulnerable period is thus given by P0 = e^{-2G}. Using S = GP0, we get

S = G e^{-2G}
The relation between the offered
traffic and the throughput is shown in Fig. 4-3. The maximum throughput occurs at G =
0.5, with S = 1/(2e), which is about 0.184. In other words, the best we can hope
for is a channel utilization of 18 percent. This result is not very
encouraging, but with everyone transmitting at will, we could hardly have
expected a 100 percent success rate.
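To make the result concrete, the short Python sketch below (not part of the original text) tabulates S = Ge^{-2G} for a few offered loads and confirms the 0.184 peak at G = 0.5.

```python
import math

def pure_aloha_throughput(G):
    """Throughput S = G * e^(-2G) of pure ALOHA, as derived above."""
    return G * math.exp(-2 * G)

# The curve of Fig. 4-3 peaks at G = 0.5, where S = 1/(2e), about 0.184.
for G in (0.1, 0.25, 0.5, 1.0, 2.0):
    print(f"G = {G:4.2f}:  S = {pure_aloha_throughput(G):.3f}")
```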
In 1972, Roberts published a method
for doubling the capacity of an ALOHA system (Roberts, 1972). His proposal was
to divide time into discrete intervals, each interval corresponding to one
frame. This approach requires the users to agree on slot boundaries. One way to
achieve synchronization would be to have one special station emit a pip at the
start of each interval, like a clock.
In Roberts' method, which has come
to be known as slotted ALOHA, in contrast to Abramson's pure ALOHA, a computer
is not permitted to send whenever a carriage return is typed. Instead, it is
required to wait for the beginning of the next slot. Thus, the continuous pure
ALOHA is turned into a discrete one. Since the vulnerable period is now halved, the probability of no other traffic during the same slot as our test frame is e^{-G}, which leads to

S = G e^{-G}
As you can see from Fig. 4-3, slotted ALOHA peaks at G = 1, with a
throughput of S = 1/e or about 0.368, twice that of pure ALOHA. If the system is
operating at G = 1, the probability of an empty slot is 0.368 (from Eq. 4-2). The best we can hope for using slotted
ALOHA is 37 percent of the slots empty, 37 percent successes, and 26 percent
collisions. Operating at higher values of G reduces the number of empties but
increases the number of collisions exponentially. To see how this rapid growth
of collisions with G comes about, consider the transmission of a test frame.
The probability that it will avoid a collision is e-G, the
probability that all the other users are silent in that slot. The probability
of a collision is then just 1 - e-G. The probability of a
transmission requiring exactly k attempts, (i.e., k - 1 collisions followed by
one success) is
The expected number of transmissions, E, per carriage return typed is then

E = ∑_{k=1}^{∞} k P_k = ∑_{k=1}^{∞} k e^{-G} (1 − e^{-G})^{k−1} = e^{G}
As a result of the exponential
dependence of E upon G, small increases in the channel load can drastically
reduce its performance.
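The two slotted ALOHA formulas are easy to check numerically. The sketch below evaluates S = Ge^{-G} and E = e^G for a few loads; note how E explodes as G grows past 1.

```python
import math

def slotted_aloha_throughput(G):
    """Throughput S = G * e^(-G) of slotted ALOHA (vulnerable period halved)."""
    return G * math.exp(-G)

def expected_attempts(G):
    """E = e^G: the per-attempt success probability is e^(-G), so the
    number of attempts is geometric with mean 1 / e^(-G) = e^G."""
    return math.exp(G)

for G in (0.5, 1.0, 2.0, 3.0):
    print(f"G = {G:.1f}:  S = {slotted_aloha_throughput(G):.3f},  "
          f"E = {expected_attempts(G):.1f} attempts per frame")
```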
Slotted ALOHA is important for a
reason that may not be initially obvious. It was devised in the 1970s, used in
a few early experimental systems, then almost forgotten. When Internet access
over the cable was invented, all of a sudden there was a problem of how to
allocate a shared channel among multiple competing users, and slotted ALOHA was
pulled out of the garbage can to save the day. It has often happened that
protocols that are perfectly valid fall into disuse for political reasons
(e.g., some big company wants everyone to do things its way), but years later
some clever person realizes that a long-discarded protocol solves his current
problem. For this reason, in this chapter we will study a number of elegant
protocols that are not currently in widespread use, but might easily be used in
future applications, provided that enough network designers are aware of them.
Of course, we will also study many protocols that are in current use as well.
With slotted ALOHA the best channel
utilization that can be achieved is 1/e. This is hardly surprising, since with
stations transmitting at will, without paying attention to what the other stations
are doing, there are bound to be many collisions. In local area networks,
however, it is possible for stations to detect what other stations are doing,
and adapt their behavior accordingly. These networks can achieve a much better
utilization than 1/e. In this section we will discuss some protocols for
improving performance.
Protocols in which stations listen
for a carrier (i.e., a transmission) and act accordingly are called carrier
sense protocols. A number of them have been proposed. Kleinrock and Tobagi
(1975) have analyzed several such protocols in detail. Below we will mention
several versions of the carrier sense protocols.
The first carrier sense protocol
that we will study here is called 1-persistent CSMA (Carrier Sense Multiple
Access). When a station has data to send, it first listens to the channel to
see if anyone else is transmitting at that moment. If the channel is busy, the
station waits until it becomes idle. When the station detects an idle channel,
it transmits a frame. If a collision occurs, the station waits a random amount
of time and starts all over again. The protocol is called 1-persistent because
the station transmits with a probability of 1 when it finds the channel idle.
The propagation delay has an
important effect on the performance of the protocol. There is a small chance
that just after a station begins sending, another station will become ready to
send and sense the channel. If the first station's signal has not yet reached
the second one, the latter will sense an idle channel and will also begin
sending, resulting in a collision. The longer the propagation delay, the more
important this effect becomes, and the worse the performance of the protocol.
Even if the propagation delay is
zero, there will still be collisions. If two stations become ready in the
middle of a third station's transmission, both will wait politely until the
transmission ends and then both will begin transmitting exactly simultaneously,
resulting in a collision. If they were not so impatient, there would be fewer
collisions. Even so, this protocol is far better than pure ALOHA because both
stations have the decency to desist from interfering with the third station's
frame. Intuitively, this approach will lead to a higher performance than pure
ALOHA. Exactly the same holds for slotted ALOHA.
A second carrier sense protocol is nonpersistent
CSMA. In this protocol, a conscious attempt is made to be less greedy than in
the previous one. Before sending, a station senses the channel. If no one else
is sending, the station begins doing so itself. However, if the channel is
already in use, the station does not continually sense it for the purpose of
seizing it immediately upon detecting the end of the previous transmission. Instead,
it waits a random period of time and then repeats the algorithm. Consequently,
this algorithm leads to better channel utilization but longer delays than
1-persistent CSMA.
The last protocol is p-persistent
CSMA. It applies to slotted channels and works as follows. When a station
becomes ready to send, it senses the channel. If it is idle, it transmits with
a probability p. With a probability q = 1 - p, it defers until the next slot.
If that slot is also idle, it either transmits or defers again, with probabilities
p and q. This process is repeated until either the frame has been transmitted
or another station has begun transmitting. In the latter case, the unlucky
station acts as if there had been a collision (i.e., it waits a random time and
starts again). If the station initially senses the channel busy, it waits until
the next slot and applies the above algorithm. Figure 4-4 shows the computed throughput versus
offered traffic for all three protocols, as well as for pure and slotted ALOHA.
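To summarize the three variants, here is a minimal sketch of their decision rules. The channel primitives (channel_busy, wait_one_slot, wait_random, send_frame) are hypothetical stubs standing in for real MAC hardware; only the control flow is meant to be faithful.

```python
import random

# Hypothetical channel primitives, stubbed so the control flow can run.
def channel_busy():  return random.random() < 0.3   # pretend 30% busy
def wait_one_slot(): pass
def wait_random():   pass
def send_frame():    print("frame sent")

def csma_1_persistent():
    while channel_busy():
        pass                    # keep sensing; seize the channel the instant it frees
    send_frame()                # transmit with probability 1 once idle

def csma_nonpersistent():
    while channel_busy():
        wait_random()           # busy: back off without watching for the end
    send_frame()

def csma_p_persistent(p):
    while channel_busy():       # slotted channel assumed
        wait_one_slot()
    while True:
        if random.random() < p:
            send_frame()        # idle slot: transmit with probability p
            return
        wait_one_slot()         # defer one slot with probability q = 1 - p
        if channel_busy():      # another station started: act as if collided
            wait_random()
            return csma_p_persistent(p)

csma_p_persistent(0.5)
```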
Persistent and nonpersistent CSMA
protocols are clearly an improvement over ALOHA because they ensure that no
station begins to transmit when it senses the channel busy. Another improvement
is for stations to abort their transmissions as soon as they detect a
collision. In other words, if two stations sense the channel to be idle and
begin transmitting simultaneously, they will both detect the collision almost
immediately. Rather than finish transmitting their frames, which are
irretrievably garbled anyway, they should abruptly stop transmitting as soon as
the collision is detected. Quickly terminating damaged frames saves time and
bandwidth. This protocol, known as CSMA/CD (CSMA with Collision Detection), is
widely used on LANs in the MAC sublayer. In particular, it is the basis of the
popular Ethernet LAN, so it is worth devoting some time to looking at it in
detail.
CSMA/CD, as well as many other LAN
protocols, uses the conceptual model of Fig. 4-5. At the point marked t0, a
station has finished transmitting its frame. Any other station having a frame
to send may now attempt to do so. If two or more stations decide to transmit
simultaneously, there will be a collision. Collisions can be detected by
looking at the power or pulse width of the received signal and comparing it to
the transmitted signal.
After a station detects a collision,
it aborts its transmission, waits a random period of time, and then tries
again, assuming that no other station has started transmitting in the meantime.
Therefore, our model for CSMA/CD will consist of alternating contention and
transmission periods, with idle periods occurring when all stations are quiet
(e.g., for lack of work).
Now let us look closely at the
details of the contention algorithm. Suppose that two stations both begin
transmitting at exactly time t0. How long will it take them to
realize that there has been a collision? The answer to this question is vital
to determining the length of the contention period and hence what the delay and
throughput will be. The minimum time to detect the collision is then just the
time it takes the signal to propagate from one station to the other.
Based on this reasoning, you might
think that a station not hearing a collision for a time equal to the full cable
propagation time after starting its transmission could be sure it had seized
the cable. By ''seized,'' we mean that all other stations knew it was
transmitting and would not interfere. This conclusion is wrong. Consider the
following worst-case scenario. Let the time for a signal to propagate between
the two farthest stations be t. At t0, one station begins transmitting. At t − ε, an instant before the signal arrives at the most distant station, that station also begins transmitting. Of course, it detects the collision almost instantly and stops, but the little noise burst caused by the collision does not get back to the original station until time 2t − ε. In other words, in the worst case a station cannot be sure that it has seized the channel until it has transmitted for 2t
without hearing a collision. For this reason we will model the contention
interval as a slotted ALOHA system with slot width 2t.
On a 1-km long coaxial cable, t ≈ 5 µsec. For simplicity we will assume that
each slot contains just 1 bit. Once the channel has been seized, a station can
transmit at any rate it wants to, of course, not just at 1 bit per 2t
sec.
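As a back-of-the-envelope check, using the 5-µsec figure above and an assumed (purely illustrative) 10-Mbps line rate:

```python
PROPAGATION_DELAY_US = 5.0   # t for a 1-km coaxial cable, per the text
BIT_RATE_MBPS = 10.0         # assumed line rate, for illustration only

# A station must transmit for 2t without hearing a collision before it
# can be sure it has seized the channel.
contention_slot_us = 2 * PROPAGATION_DELAY_US
bits_before_certainty = contention_slot_us * BIT_RATE_MBPS   # µs * Mbit/s = bits

print(f"contention slot = {contention_slot_us:.0f} µs; "
      f"{bits_before_certainty:.0f} bits must be sent before seizure is certain")
```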
It is important to realize that
collision detection is an analog process. The station's hardware must listen to
the cable while it is transmitting. If what it reads back is different from
what it is putting out, it knows that a collision is occurring. The implication
is that the signal encoding must allow collisions to be detected (e.g., a
collision of two 0-volt signals may well be impossible to detect). For this
reason, special encoding is commonly used.
It is also worth noting that a
sending station must continually monitor the channel, listening for noise
bursts that might indicate a collision. For this reason, CSMA/CD with a single
channel is inherently a half-duplex system. It is impossible for a station to
transmit and receive frames at the same time because the receiving logic is in
use, looking for collisions during every transmission.
To avoid any misunderstanding, it is
worth noting that no MAC-sublayer protocol guarantees reliable delivery. Even
in the absence of collisions, the receiver may not have copied the frame
correctly for various reasons (e.g., lack of buffer space or a missed
interrupt).
Although collisions do not occur
with CSMA/CD once a station has unambiguously captured the channel, they can
still occur during the contention period. These collisions adversely affect the
system performance, especially when the cable is long (i.e., large t)
and the frames are short. And CSMA/CD is not universally applicable. In this
section, we will examine some protocols that resolve the contention for the
channel without any collisions at all, not even during the contention period.
Most of these are not currently used in major systems, but in a rapidly
changing field, having some protocols with excellent properties available for
future systems is often a good thing.
In the protocols to be described, we
assume that there are exactly N stations, each with a unique address from 0 to N
- 1 ''wired'' into it. It does not matter that some stations may be inactive
part of the time. We also assume that propagation delay is negligible. The
basic question remains: Which station gets the channel after a successful
transmission? We continue using the model of Fig. 4-5 with its discrete contention slots.
In our first collision-free
protocol, the basic bit-map method, each contention period consists of exactly N
slots. If station 0 has a frame to send, it transmits a 1 bit during the zeroth
slot. No other station is allowed to transmit during this slot. Regardless of
what station 0 does, station 1 gets the opportunity to transmit a 1 during slot
1, but only if it has a frame queued. In general, station j may announce that
it has a frame to send by inserting a 1 bit into slot j. After all N slots have
passed by, each station has complete knowledge of which stations wish to
transmit. At that point, they begin transmitting in numerical order (see Fig. 4-6).
Since everyone agrees on who goes
next, there will never be any collisions. After the last ready station has
transmitted its frame, an event all stations can easily monitor, another N bit
contention period is begun. If a station becomes ready just after its bit slot
has passed by, it is out of luck and must remain silent until every station has
had a chance and the bit map has come around again. Protocols like this in
which the desire to transmit is broadcast before the actual transmission are called
reservation protocols.
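Since the bit map makes the schedule completely deterministic, one contention period is easy to sketch in code. The sketch below assumes every station observes the map without error:

```python
def bitmap_round(ready, N):
    """One contention period of the basic bit-map protocol.

    ready: the set of station numbers (0..N-1) with a frame queued.
    Returns the collision-free transmission order for this round.
    """
    # Station j asserts a 1 bit in contention slot j if it has a frame.
    bitmap = [1 if j in ready else 0 for j in range(N)]
    # After N slots everyone has seen the map, so the ready stations
    # simply transmit in numerical order.
    return [j for j, bit in enumerate(bitmap) if bit]

print(bitmap_round({1, 4, 6}, N=8))   # -> [1, 4, 6]
```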
Let us briefly analyze the
performance of this protocol. For convenience, we will measure time in units of
the contention bit slot, with data frames consisting of d time units. Under
conditions of low load, the bit map will simply be repeated over and over, for
lack of data frames.
Consider the situation from the
point of view of a low-numbered station, such as 0 or 1. Typically, when it
becomes ready to send, the ''current'' slot will be somewhere in the middle of
the bit map. On average, the station will have to wait N/2 slots for the
current scan to finish and another full N slots for the following scan to run
to completion before it may begin transmitting.
The prospects for high-numbered
stations are brighter. Generally, these will only have to wait half a scan (N/2
bit slots) before starting to transmit. High-numbered stations rarely have to
wait for the next scan. Since low-numbered stations must wait on average 1.5N
slots and high-numbered stations must wait on average 0.5N slots, the mean for
all stations is N slots. The channel efficiency at low load is easy to compute.
The overhead per frame is N bits, and the amount of data is d bits, for an
efficiency of d/(N + d).
At high load, when all the stations
have something to send all the time, the N bit contention period is prorated
over N frames, yielding an overhead of only 1 bit per frame, or an efficiency
of d/(d + 1). The mean delay for a frame is equal to the sum of the time it
queues inside its station, plus an additional N(d + 1)/2 once it gets to the
head of its internal queue.
A problem with the basic bit-map
protocol is that the overhead is 1 bit per station, so it does not scale well
to networks with thousands of stations. We can do better than that by using
binary station addresses. A station wanting to use the channel now broadcasts
its address as a binary bit string, starting with the high-order bit. All
addresses are assumed to be the same length. The bits in each address position
from different stations are BOOLEAN ORed together. We will call this protocol binary
countdown. It was used in Datakit (Fraser, 1987). It implicitly assumes that
the transmission delays are negligible so that all stations see asserted bits
essentially instantaneously.
To avoid conflicts, an arbitration
rule must be applied: as soon as a station sees that a high-order bit position
that is 0 in its address has been overwritten with a 1, it gives up. For
example, if stations 0010, 0100, 1001, and 1010 are all trying to get the
channel, in the first bit time the stations transmit 0, 0, 1, and 1,
respectively. These are ORed together to form a 1. Stations 0010 and 0100 see
the 1 and know that a higher-numbered station is competing for the channel, so
they give up for the current round. Stations 1001 and 1010 continue.
The next bit is 0, and both stations
continue. The next bit is 1, so station 1001 gives up. The winner is station
1010 because it has the highest address. After winning the bidding, it may now
transmit a frame, after which another bidding cycle starts. The protocol is
illustrated in Fig. 4-7. It has the property that
higher-numbered stations have a higher priority than lower-numbered stations,
which may be either good or bad, depending on the context.
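A compact way to see the arbitration rule is to model the channel's wired-OR with a bitwise OR, as in the sketch below; it reproduces the example of Fig. 4-7.

```python
def binary_countdown(contenders, address_bits):
    """Bitwise arbitration: stations broadcast their addresses MSB first,
    the channel ORs the bits together, and a station drops out as soon as
    a 0 bit of its own has been overwritten by someone else's 1."""
    alive = set(contenders)
    for bit in range(address_bits - 1, -1, -1):
        channel = 0
        for station in alive:
            channel |= (station >> bit) & 1               # wired-OR of this bit
        if channel == 1:
            alive = {s for s in alive if (s >> bit) & 1}  # stations with 0 give up
    winner, = alive            # exactly one station survives
    return winner

# Stations 0010, 0100, 1001, and 1010 contend; 1010 wins, as in Fig. 4-7.
print(bin(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010], 4)))
```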
The channel efficiency of this
method is d/(d + log2 N). If, however, the frame format has been
cleverly chosen so that the sender's address is the first field in the frame,
even these log2 N bits are not wasted, and the efficiency is 100
percent.
Mok and Ward (1979) have described a
variation of binary countdown using a parallel rather than a serial interface.
They also suggest using virtual station numbers, with the virtual station
numbers from 0 up to and including the successful station being circularly
permuted after each transmission, in order to give higher priority to stations
that have been silent unusually long. For example, if stations C, H, D, A, G, B,
E, F have priorities 7, 6, 5, 4, 3, 2, 1, and 0, respectively, then a
successful transmission by D puts it at the end of the list, giving a priority
order of C, H, A, G, B, E, F, D. Thus, C remains virtual station 7, but A moves
up from 4 to 5 and D drops from 5 to 0. Station D will now only be able to
acquire the channel if no other station wants it.
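The permutation rule itself is a one-liner; the sketch below reproduces the example (station names are arbitrary):

```python
def permute_after_win(priority_order, winner):
    """Mok and Ward's rule: the winner drops to virtual station 0 (the
    end of the list) and every station that was below it moves up one."""
    order = [s for s in priority_order if s != winner]
    order.append(winner)
    return order

# C, H, D, A, G, B, E, F hold priorities 7 down to 0; D transmits.
print(permute_after_win(list("CHDAGBEF"), "D"))   # C H A G B E F D
```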
Binary countdown is an example of a
simple, elegant, and efficient protocol that is waiting to be rediscovered.
Hopefully, it will find a new home some day.
We have now considered two basic
strategies for channel acquisition in a cable network: contention, as in CSMA,
and collision-free methods. Each strategy can be rated as to how well it does
with respect to the two important performance measures, delay at low load and
channel efficiency at high load. Under conditions of light load, contention
(i.e., pure or slotted ALOHA) is preferable due to its low delay. As the load
increases, contention becomes increasingly less attractive, because the
overhead associated with channel arbitration becomes greater. Just the reverse
is true for the collision-free protocols. At low load, they have high delay,
but as the load increases, the channel efficiency improves rather than gets
worse as it does for contention protocols.
Obviously, it would be nice if we
could combine the best properties of the contention and collision-free
protocols, arriving at a new protocol that used contention at low load to
provide low delay, but used a collision-free technique at high load to provide
good channel efficiency. Such protocols, which we will call limited-contention
protocols, do, in fact, exist, and will conclude our study of carrier sense
networks.
Up to now the only contention
protocols we have studied have been symmetric, that is, each station attempts
to acquire the channel with some probability, p, with all stations using the
same p. Interestingly enough, the overall system performance can sometimes be
improved by using a protocol that assigns different probabilities to different
stations.
Before looking at the asymmetric
protocols, let us quickly review the performance of the symmetric case. Suppose
that k stations are contending for channel access. Each has a probability p of
transmitting during each slot. The probability that some station successfully
acquires the channel during a given slot is then kp(1 − p)^{k−1}.
To find the optimal value of p, we differentiate with respect to p, set the
result to zero, and solve for p. Doing so, we find that the best value of p is
1/k. Substituting p = 1/k, we get

Pr[success with optimal p] = (1 − 1/k)^{k−1}
This probability is plotted in Fig. 4-8. For small numbers of stations, the
chances of success are good, but as soon as the number of stations reaches even
five, the probability has dropped close to its asymptotic value of 1/e.
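A minimal sketch of the computation behind Fig. 4-8:

```python
def success_probability(k, p):
    """P(exactly one of k stations transmits in a slot) = k p (1 - p)^(k-1)."""
    return k * p * (1 - p) ** (k - 1)

# With the optimal p = 1/k, the probability falls toward 1/e ~ 0.368.
for k in (1, 2, 5, 10, 100):
    print(f"k = {k:3d}:  best P = {success_probability(k, 1.0 / k):.3f}")
```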
From Fig. 4-8, it is fairly obvious that the
probability of some station acquiring the channel can be increased only by
decreasing the amount of competition. The limited-contention protocols do
precisely that. They first divide the stations into (not necessarily disjoint)
groups. Only the members of group 0 are permitted to compete for slot 0. If one
of them succeeds, it acquires the channel and transmits its frame. If the slot
lies fallow or if there is a collision, the members of group 1 contend for slot
1, etc. By making an appropriate division of stations into groups, the amount
of contention for each slot can be reduced, thus operating each slot near the
left end of Fig. 4-8.
The trick is how to assign stations
to slots. Before looking at the general case, let us consider some special
cases. At one extreme, each group has but one member. Such an assignment
guarantees that there will never be collisions because at most one station is
contending for any given slot. We have seen such protocols before (e.g., binary
countdown). The next special case is to assign two stations per group. The
probability that both will try to transmit during a slot is p2,
which for small p is negligible. As more and more stations are assigned to the
same slot, the probability of a collision grows, but the length of the bit-map
scan needed to give everyone a chance shrinks. The limiting case is a single
group containing all stations (i.e., slotted ALOHA). What we need is a way to
assign stations to slots dynamically, with many stations per slot when the load
is low and few (or even just one) station per slot when the load is high.
One particularly simple way of
performing the necessary assignment is to use the algorithm devised by the U.S.
Army for testing soldiers for syphilis during World War II (Dorfman, 1943). In
short, the Army took a blood sample from N soldiers. A portion of each sample
was poured into a single test tube. This mixed sample was then tested for
antibodies. If none were found, all the soldiers in the group were declared
healthy. If antibodies were present, two new mixed samples were prepared, one
from soldiers 1 through N/2 and one from the rest. The process was repeated
recursively until the infected soldiers were determined.
For the computerized version of this
algorithm (Capetanakis, 1979), it is convenient to think of the stations as the
leaves of a binary tree, as illustrated in Fig. 4-9. In the first contention slot following
a successful frame transmission, slot 0, all stations are permitted to try to
acquire the channel. If one of them does so, fine. If there is a collision,
then during slot 1 only those stations falling under node 2 in the tree may
compete. If one of them acquires the channel, the slot following the frame is
reserved for those stations under node 3. If, on the other hand, two or more
stations under node 2 want to transmit, there will be a collision during slot
1, in which case it is node 4's turn during slot 2.
In essence, if a collision occurs
during slot 0, the entire tree is searched, depth first, to locate all ready
stations. Each bit slot is associated with some particular node in the tree. If
a collision occurs, the search continues recursively with the node's left and
right children. If a bit slot is idle or if only one station transmits in it,
the searching of its node can stop because all ready stations have been
located. (Were there more than one, there would have been a collision.)
When the load on the system is
heavy, it is hardly worth the effort to dedicate slot 0 to node 1, because that
makes sense only in the unlikely event that precisely one station has a frame
to send. Similarly, one could argue that nodes 2 and 3 should be skipped as
well for the same reason. Put in more general terms, at what level in the tree
should the search begin? Clearly, the heavier the load, the farther down the
tree the search should begin. We will assume that each station has a good
estimate of the number of ready stations, q, for example, from monitoring
recent traffic.
To proceed, let us number the levels
of the tree from the top, with node 1 in Fig. 4-9 at level 0, nodes 2 and 3 at level 1,
etc. Notice that each node at level i has a fraction 2^{-i} of the stations below it. If the q ready stations are uniformly distributed, the expected number of them below a specific node at level i is just 2^{-i}q. Intuitively, we would expect the optimal level at which to begin searching the tree to be the one at which the mean number of contending stations per slot is 1, that is, the level at which 2^{-i}q = 1. Solving this equation, we find that i = log2 q.
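The sketch below puts the two pieces together: the optimal starting level and a depth-first tree walk. It abstracts away real slot timing (each probe is just a recursive call) and numbers stations and nodes as in Fig. 4-9.

```python
import math

def starting_level(q):
    """Optimal level to begin the search: solve 2^(-i) * q = 1 => i = log2 q."""
    return max(0, round(math.log2(q)))

def stations_under(node, depth=3):
    """Stations (leaves) below a node of a complete binary tree whose
    root is node 1 and whose leaves are nodes 2^depth .. 2^(depth+1)-1."""
    lo, hi = node, node
    while lo < 2 ** depth:
        lo, hi = 2 * lo, 2 * hi + 1    # expand one level toward the leaves
    return set(range(lo - 2 ** depth, hi - 2 ** depth + 1))

def tree_walk(node, ready, depth=3):
    """Depth-first probe of the station tree (cf. Fig. 4-9); returns
    the ready stations in the order in which they transmit."""
    contenders = ready & stations_under(node, depth)
    if len(contenders) <= 1:
        return sorted(contenders)      # idle slot, or one clean success
    # Collision in this slot: probe the left child, then the right.
    return tree_walk(2 * node, ready, depth) + tree_walk(2 * node + 1, ready, depth)

# Stations G and H (numbers 6 and 7) are the only ready ones.
print(tree_walk(1, {6, 7}))            # -> [6, 7]
print(starting_level(q=4))             # with ~4 ready stations, start at level 2
```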
Numerous improvements to the basic
algorithm have been discovered and are discussed in some detail by Bertsekas
and Gallager (1992). For example, consider the case of stations G and H being
the only ones wanting to transmit. At node 1 a collision will occur, so 2 will
be tried and discovered idle. It is pointless to probe node 3 since it is
guaranteed to have a collision (we know that two or more stations under 1 are
ready and none of them are under 2, so they must all be under 3). The probe of
3 can be skipped and 6 tried next. When this probe also turns up nothing, 7 can
be skipped and node G tried next.
A different approach to channel
allocation is to divide the channel into subchannels using FDM, TDM, or both,
and dynamically allocate them as needed. Schemes like this are commonly used on
fiber optic LANs to permit different conversations to use different wavelengths
(i.e., frequencies) at the same time. In this section we will examine one such
protocol (Humblet et al., 1992).
A simple way to build an all-optical
LAN is to use a passive star coupler (see Fig. 2-10). In effect, two fibers from each
station are fused to a glass cylinder. One fiber is for output to the cylinder
and one is for input from the cylinder. Light output by any station illuminates
the cylinder and can be detected by all the other stations. Passive stars can
handle hundreds of stations.
To allow multiple transmissions at
the same time, the spectrum is divided into channels (wavelength bands), as
shown in Fig. 2-31. In this protocol, WDMA (Wavelength
Division Multiple Access), each station is assigned two channels. A narrow
channel is provided as a control channel to signal the station, and a wide
channel is provided so the station can output data frames.
Each channel is divided into groups
of time slots, as shown in Fig. 4-10. Let us call the number of slots in the
control channel m and the number of slots in the data channel n + 1, where n of
these are for data and the last one is used by the station to report on its
status (mainly, which slots on both channels are free). On both channels, the
sequence of slots repeats endlessly, with slot 0 being marked in a special way
so latecomers can detect it. All channels are synchronized by a single global
clock.
The protocol supports three traffic
classes: (1) constant data rate connection-oriented traffic, such as uncompressed
video, (2) variable data rate connection-oriented traffic, such as file
transfer, and (3) datagram traffic, such as UDP packets. For the two
connection-oriented protocols, the idea is that for A to communicate with B, it
must first insert a CONNECTION REQUEST frame in a free slot on B's control
channel. If B accepts, communication can take place on A's data channel.
Each station has two transmitters
and two receivers, as follows:
- A fixed-wavelength receiver for listening to its own control channel.
- A tunable transmitter for sending on other stations' control channels.
- A fixed-wavelength transmitter for outputting data frames.
- A tunable receiver for selecting a data transmitter to listen to.
In other words, every station
listens to its own control channel for incoming requests but has to tune to the
transmitter's wavelength to get the data. Wavelength tuning is done by a
Fabry-Perot or Mach-Zehnder interferometer that filters out all wavelengths
except the desired wavelength band.
Let us now consider how station A
sets up a class 2 communication channel with station B for, say, file transfer.
First, A tunes its data receiver to B's data channel and waits for the status
slot. This slot tells which control slots are currently assigned and which are
free. In Fig. 4-10, for example, we see that of B's eight
control slots, 0, 4, and 5 are free. The rest are occupied (indicated by
crosses).
A picks one of the free control
slots, say, 4, and inserts its CONNECTION REQUEST message there. Since B
constantly monitors its control channel, it sees the request and grants it by
assigning slot 4 to A. This assignment is announced in the status slot of B's
data channel. When A sees the announcement, it knows it has a unidirectional
connection. If A asked for a two-way connection, B now repeats the same
algorithm with A.
It is possible that at the same time
A tried to grab B's control slot 4, C did the same thing. Neither will get it,
and both will notice the failure by monitoring the status slot in B's control
channel. They now each wait a random amount of time and try again later.
At this point, each party has a
conflict-free way to send short control messages to the other one. To perform
the file transfer, A now sends B a control message saying, for example,
''Please watch my next data output slot 3. There is a data frame for you in
it.'' When B gets the control message, it tunes its receiver to A's output
channel to read the data frame. Depending on the higher-layer protocol, B can
use the same mechanism to send back an acknowledgement if it wishes.
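The control-slot handshake can be sketched as follows. This is a toy model: it assumes B's slot table is directly inspectable, whereas in the real protocol A learns the free slots only from the status slot on B's data channel, and the class and method names are illustrative rather than taken from the paper.

```python
class WdmaStation:
    def __init__(self, name, m=8):
        self.name = name
        self.control_owner = [None] * m   # owner of each control slot, or None

    def free_control_slots(self):
        """What the status slot advertises: indices of unassigned slots."""
        return [i for i, o in enumerate(self.control_owner) if o is None]

    def request_connection(self, requester, slot):
        """Grant the slot if it is still free (the grant is announced in
        the status slot); a requester that loses the race retries after
        waiting a random amount of time."""
        if self.control_owner[slot] is None:
            self.control_owner[slot] = requester
            return True
        return False

b = WdmaStation("B")
print(b.free_control_slots())            # all of 0..7 are free initially
print(b.request_connection("A", 4))      # A claims B's control slot 4 -> True
print(b.request_connection("C", 4))      # C arrives second -> False, retry later
```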
Note that a problem arises if both A
and C have connections to B and each of them suddenly tells B to look at slot
3. B will pick one of these requests at random, and the other transmission will
be lost.
For constant rate traffic, a
variation of this protocol is used. When A asks for a connection, it
simultaneously says something like: Is it all right if I send you a frame in
every occurrence of slot 3? If B is able to accept (i.e., has no previous
commitment for slot 3), a guaranteed bandwidth connection is established. If
not, A can try again with a different proposal, depending on which output slots
it has free.
Class 3 (datagram) traffic uses
still another variation. Instead of writing a CONNECTION REQUEST message into
the control slot it just found (4), it writes a DATA FOR YOU IN SLOT 3 message.
If B is free during the next data slot 3, the transmission will succeed.
Otherwise, the data frame is lost. In this manner, no connections are ever
needed.
Several variants of the protocol are
possible. For example, instead of each station having its own control channel,
a single control channel can be shared by all stations. Each station is
assigned a block of slots in each group, effectively multiplexing multiple
virtual channels onto one physical one.
It is also possible to make do with
a single tunable transmitter and a single tunable receiver per station by
having each station's channel be divided into m control slots followed by n + 1
data slots. The disadvantage here is that senders have to wait longer to
capture a control slot and consecutive data frames are farther apart because
some control information is in the way.
Numerous other WDMA protocols have
been proposed and implemented, differing in various details. Some have only one
control channel; others have multiple control channels. Some take propagation
delay into account; others do not. Some make tuning time an explicit part of
the model; others ignore it. The protocols also differ in terms of processing
complexity, throughput, and scalability. When a large number of frequencies are
being used, the system is sometimes called DWDM (Dense Wavelength Division
Multiplexing). For more information see (Bogineni et al., 1993; Chen, 1994;
Goralski, 2001; Kartalopoulos, 1999; and Levine and Akyildiz, 1995).
As the number of mobile computing
and communication devices grows, so does the demand to connect them to the
outside world. Even the very first mobile telephones had the ability to connect
to other telephones. The first portable computers did not have this capability,
but soon afterward, modems became commonplace on notebook computers. To go
on-line, these computers had to be plugged into a telephone wall socket.
Requiring a wired connection to the fixed network meant that the computers were
portable, but not mobile.
To achieve true mobility, notebook
computers need to use radio (or infrared) signals for communication. In this
manner, dedicated users can read and send e-mail while hiking or boating. A
system of notebook computers that communicate by radio can be regarded as a
wireless LAN, as we discussed in Sec. 1.5.4. These LANs have somewhat different
properties than conventional LANs and require special MAC sublayer protocols.
In this section we will examine some of these protocols. More information about
wireless LANs can be found in (Geier, 2002; and O'Hara and Petrick, 1999).
A common configuration for a
wireless LAN is an office building with base stations (also called access
points) strategically placed around the building. Unlike cellular telephone
systems, each cell has only one channel, covering the entire available
bandwidth and covering all the stations in its cell. Typically, its bandwidth
is 11 to 54 Mbps.
In our discussions below, we will
make the simplifying assumption that all radio transmitters have some fixed
range. When a receiver is within range of two active transmitters, the
resulting signal will generally be garbled and useless, in other words, we will
not consider CDMA-type systems further in this discussion. It is important to
realize that in some wireless LANs, not all stations are within range of one
another, which leads to a variety of complications. Furthermore, for indoor
wireless LANs, the presence of walls between stations can have a major impact
on the effective range of each station.
A naive approach to using a wireless
LAN might be to try CSMA: just listen for other transmissions and only transmit
if no one else is doing so. The trouble is, this protocol is not really
appropriate because what matters is interference at the receiver, not at the
sender. To see the nature of the problem, consider Fig. 4-11, where four wireless stations are
illustrated. For our purposes, it does not matter which are base stations and
which are notebooks. The radio range is such that A and B are within each
other's range and can potentially interfere with one another. C can also
potentially interfere with both B and D, but not with A.
First consider what happens when A
is transmitting to B, as depicted in Fig. 4-11(a). If C senses the medium, it will not
hear A because A is out of range, and thus falsely conclude that it can
transmit to B. If C does start transmitting, it will interfere at B, wiping out
the frame from A. The problem of a station not being able to detect a potential
competitor for the medium because the competitor is too far away is called the hidden
station problem.
Now let us consider the reverse
situation: B transmitting to A, as shown in Fig. 4-11(b). If C senses the medium, it will
hear an ongoing transmission and falsely conclude that it may not send to D,
when in fact such a transmission would cause bad reception only in the zone
between B and C, where neither of the intended receivers is located. This is
called the exposed station problem.
The problem is that before starting
a transmission, a station really wants to know whether there is activity around
the receiver. CSMA merely tells it whether there is activity around the station
sensing the carrier. With a wire, all signals propagate to all stations so only
one transmission can take place at once anywhere in the system. In a system
based on short-range radio waves, multiple transmissions can occur
simultaneously if they all have different destinations and these destinations
are out of range of one another.
Another way to think about this
problem is to imagine an office building in which every employee has a wireless
notebook computer. Suppose that Linda wants to send a message to Milton.
Linda's computer senses the local environment and, detecting no activity,
starts sending. However, there may still be a collision in Milton's office
because a third party may currently be sending to him from a location so far
from Linda that her computer could not detect it.
An early protocol designed for
wireless LANs is MACA (Multiple Access with Collision Avoidance) (Karn, 1990).
The basic idea behind it is for the sender to stimulate the receiver into
outputting a short frame, so stations nearby can detect this transmission and
avoid transmitting for the duration of the upcoming (large) data frame. MACA is
illustrated in Fig. 4-12.
Let us now consider how A sends a
frame to B. A starts by sending an RTS (Request To Send) frame to B, as shown in
Fig. 4-12(a). This short frame (30 bytes)
contains the length of the data frame that will eventually follow. Then B
replies with a CTS (Clear to Send) frame, as shown in Fig. 4-12(b). The CTS frame contains the data
length (copied from the RTS frame). Upon receipt of the CTS frame, A begins
transmission.
Now let us see how stations
overhearing either of these frames react. Any station hearing the RTS is
clearly close to A and must remain silent long enough for the CTS to be
transmitted back to A without conflict. Any station hearing the CTS is clearly
close to B and must remain silent during the upcoming data transmission, whose
length it can tell by examining the CTS frame.
In Fig. 4-12, C is within range of A but not within
range of B. Therefore, it hears the RTS from A but not the CTS from B. As long
as it does not interfere with the CTS, it is free to transmit while the data
frame is being sent. In contrast, D is within range of B but not A. It does not
hear the RTS but does hear the CTS. Hearing the CTS tips it off that it is
close to a station that is about to receive a frame, so it defers sending
anything until that frame is expected to be finished. Station E hears both
control messages and, like D, must be silent until the data frame is complete.
Despite these precautions,
collisions can still occur. For example, B and C could both send RTS frames to A
at the same time. These will collide and be lost. In the event of a collision,
an unsuccessful transmitter (i.e., one that does not hear a CTS within the
expected time interval) waits a random amount of time and tries again later.
The algorithm used is binary exponential backoff, which we will study when we
come to Ethernet.
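The sender's side of MACA can be sketched as follows. The radio primitives are stubs (no real RTS/CTS API is being shown), and the timeout and backoff constants are illustrative.

```python
import random, time

CTS_TIMEOUT = 0.1    # seconds; illustrative values, not from the paper
MAX_BACKOFF = 3.2

# Hypothetical radio primitives, stubbed so the control flow can run.
def send_rts(dest, length): print(f"RTS -> {dest} (data length {length})")
def wait_for_cts(timeout):  return random.random() < 0.7   # pretend 70% get a CTS
def send_data(frame):       print("data frame sent")

def send_with_maca(dest, frame):
    """Sender's side of MACA: RTS, await CTS, then data. Silence after
    the RTS is taken as a collision and triggers binary exponential backoff."""
    backoff = 0.1
    while True:
        send_rts(dest, len(frame))
        if wait_for_cts(CTS_TIMEOUT):
            send_data(frame)
            return
        time.sleep(random.uniform(0, backoff))   # no CTS: back off and retry
        backoff = min(2 * backoff, MAX_BACKOFF)

send_with_maca("B", b"hello")
```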
Based on
simulation studies of MACA, Bharghavan et al. (1994) fine-tuned MACA to improve
its performance and renamed their new protocol MACAW (MACA for Wireless). To
start with, they noticed that without data link layer acknowledgements, lost
frames were not retransmitted until the transport layer noticed their absence,
much later. They solved this problem by introducing an ACK frame after each
successful data frame. They also observed that CSMA has some use, namely, to
keep a station from transmitting an RTS at the same time another nearby station
is also doing so to the same destination, so carrier sensing was added. In
addition, they decided to run the backoff algorithm separately for each data
stream (source-destination pair), rather than for each station. This change
improves the fairness of the protocol. Finally, they added a mechanism for
stations to exchange information about congestion and a way to make the backoff
algorithm react less violently to temporary problems, to improve system
performance.