2.5 The Public Switched Telephone Network
When two computers owned by the same
company or organization and located close to each other need to communicate, it
is often easiest just to run a cable between them. LANs work this way. However,
when the distances are large or there are many computers or the cables have to
pass through a public road or other public right of way, the costs of running
private cables are usually prohibitive. Furthermore, in just about every
country in the world, stringing private transmission lines across (or
underneath) public property is also illegal. Consequently, the network
designers must rely on the existing telecommunication facilities.
These facilities, especially the PSTN
(Public Switched Telephone Network), were usually designed many years ago, with
a completely different goal in mind: transmitting the human voice in a
more-or-less recognizable form. Their suitability for use in computer-computer
communication is often marginal at best, but the situation is rapidly changing
with the introduction of fiber optics and digital technology. In any event, the
telephone system is so tightly intertwined with (wide area) computer networks,
that it is worth devoting some time to studying it.
To see the order of magnitude of the
problem, let us make a rough but illustrative comparison of the properties of a
typical computer-computer connection via a local cable and via a dial-up
telephone line. A cable running between two computers can transfer data at 10^9
bps, maybe more. In contrast, a dial-up line has a maximum data rate of 56
kbps, a difference of a factor of almost 20,000. That is the difference between
a duck waddling leisurely through the grass and a rocket to the moon. If the
dial-up line is replaced by an ADSL connection, there is still a factor of
1000–2000 difference.
The trouble, of course, is that
computer systems designers are used to working with computer systems, and when
suddenly confronted with another system whose performance (from their point of
view) is 3 or 4 orders of magnitude worse, they, not surprisingly, devote much
time and effort to trying to figure out how to use it efficiently. In the
following sections we will describe the telephone system and show how it works.
For additional information about the innards of the telephone system see
(Bellamy, 2000).
Soon after Alexander Graham Bell
patented the telephone in 1876 (just a few hours ahead of his rival, Elisha
Gray), there was an enormous demand for his new invention. The initial market
was for the sale of telephones, which came in pairs. It was up to the customer
to string a single wire between them. The electrons returned through the earth.
If a telephone owner wanted to talk to n other telephone owners, separate wires
had to be strung to all n houses. Within a year, the cities were covered with
wires passing over houses and trees in a wild jumble. It became immediately
obvious that the model of connecting every telephone to every other telephone,
as shown in Fig. 2-20(a), was not going to work.
To his credit, Bell saw this and
formed the Bell Telephone Company, which opened its first switching office (in
New Haven, Connecticut) in 1878. The company ran a wire to each customer's
house or office. To make a call, the customer would crank the phone to make a
ringing sound in the telephone company office to attract the attention of an
operator, who would then manually connect the caller to the callee by using a
jumper cable. The model of a single switching office is illustrated in Fig. 2-20(b).
Pretty soon, Bell System switching
offices were springing up everywhere and people wanted to make long-distance calls
between cities, so the Bell system began to connect the switching offices. The
original problem soon returned: to connect every switching office to every
other switching office by means of a wire between them quickly became
unmanageable, so second-level switching offices were invented. After a while,
multiple second-level offices were needed, as illustrated in Fig. 2-20(c). Eventually, the hierarchy grew to
five levels.
By 1890, the three major parts of
the telephone system were in place: the switching offices, the wires between
the customers and the switching offices (by now balanced, insulated, twisted
pairs instead of open wires with an earth return), and the long-distance
connections between the switching offices. While there have been improvements
in all three areas since then, the basic Bell System model has remained
essentially intact for over 100 years. For a short technical history of the
telephone system, see (Hawley, 1991).
Prior to the 1984 breakup of
AT&T, the telephone system was organized as a highly-redundant, multilevel
hierarchy. The following description is highly simplified but gives the
essential flavor nevertheless. Each telephone has two copper wires coming out
of it that go directly to the telephone company's nearest end office (also
called a local central office). The distance is typically 1 to 10 km, being
shorter in cities than in rural areas. In the United States alone there are
about 22,000 end offices. The two-wire connections between each subscriber's
telephone and the end office are known in the trade as the local loop. If the
world's local loops were stretched out end to end, they would extend to the
moon and back 1000 times.
At one time, 80 percent of
AT&T's capital value was the copper in the local loops. AT&T was then,
in effect, the world's largest copper mine. Fortunately, this fact was not
widely known in the investment community. Had it been known, some corporate
raider might have bought AT&T, terminated all telephone service in the
United States, ripped out all the wire, and sold the wire to a copper refiner
to get a quick payback.
If a subscriber attached to a given
end office calls another subscriber attached to the same end office, the
switching mechanism within the office sets up a direct electrical connection
between the two local loops. This connection remains intact for the duration of
the call.
If the called telephone is attached
to another end office, a different procedure has to be used. Each end office
has a number of outgoing lines to one or more nearby switching centers, called toll
offices (or if they are within the same local area, tandem offices). These
lines are called toll connecting trunks. If both the caller's and callee's end
offices happen to have a toll connecting trunk to the same toll office (a
likely occurrence if they are relatively close by), the connection may be
established within the toll office. A telephone network consisting only of
telephones (the small dots), end offices (the large dots), and toll offices
(the squares) is shown in Fig. 2-20(c).
If the caller and callee do not have
a toll office in common, the path will have to be established somewhere higher
up in the hierarchy. Primary, sectional, and regional offices form a network by
which the toll offices are connected. The toll, primary, sectional, and
regional exchanges communicate with each other via high-bandwidth intertoll
trunks (also called interoffice trunks). The number of different kinds of
switching centers and their topology (e.g., can two sectional offices have a
direct connection or must they go through a regional office?) varies from
country to country depending on the country's telephone density. Figure 2-21 shows how a medium-distance
connection might be routed.
A variety of transmission media are
used for telecommunication. Local loops consist of category 3 twisted pairs
nowadays, although in the early days of telephony, uninsulated wires spaced 25
cm apart on telephone poles were common. Between switching offices, coaxial
cables, microwaves, and especially fiber optics are widely used.
In the past, transmission throughout
the telephone system was analog, with the actual voice signal being transmitted
as an electrical voltage from source to destination. With the advent of fiber
optics, digital electronics, and computers, all the trunks and switches are now
digital, leaving the local loop as the last piece of analog technology in the
system. Digital transmission is preferred because it is not necessary to
accurately reproduce an analog waveform after it has passed through many
amplifiers on a long call. Being able to correctly distinguish a 0 from a 1 is
enough. This property makes digital transmission more reliable than analog. It
is also cheaper and easier to maintain.
In summary, the telephone system
consists of three major components:
- Local loops (analog twisted pairs going into houses and businesses).
- Trunks (digital fiber optics connecting the switching offices).
- Switching offices (where calls are moved from one trunk to another).
After a short digression on the
politics of telephones, we will come back to each of these three components in
some detail. The local loops provide everyone access to the whole system, so
they are critical. Unfortunately, they are also the weakest link in the system.
For the long-haul trunks, the main issue is how to collect multiple calls
together and send them out over the same fiber. This subject is called
multiplexing, and we will study three different ways to do it. Finally, there
are two fundamentally different ways of doing switching; we will look at both.
For decades prior to 1984, the Bell
System provided both local and long distance service throughout most of the
United States. In the 1970s, the U.S. Federal Government came to believe that
this was an illegal monopoly and sued to break it up. The government won, and
on January 1, 1984, AT&T was broken up into AT&T Long Lines, 23 BOCs (Bell
Operating Companies), and a few other pieces. The 23 BOCs were grouped into
seven regional BOCs (RBOCs) to make them economically viable. The entire nature
of telecommunication in the United States was changed overnight by court order
(not by an act of Congress).
The exact details of the divestiture
were described in the so-called MFJ (Modified Final Judgment, an oxymoron if
ever there was one—if the judgment could be modified, it clearly was not
final). This event led to increased competition, better service, and lower long
distance prices to consumers and businesses. However, prices for local service
rose as the cross subsidies from long-distance calling were eliminated and
local service had to become self supporting. Many other countries have now
introduced competition along similar lines.
To make it clear who could do what,
the United States was divided up into 164 LATAs (Local Access and Transport Areas).
Very roughly, a LATA is about as big as the area covered by one area code.
Within a LATA, there was one LEC (Local Exchange Carrier) that had a monopoly
on traditional telephone service within its area. The most important LECs were
the BOCs, although some LATAs contained one or more of the 1500 independent
telephone companies operating as LECs.
All inter-LATA traffic was handled
by a different kind of company, an IXC (IntereXchange Carrier). Originally,
AT&T Long Lines was the only serious IXC, but now WorldCom and Sprint are
well-established competitors in the IXC business. One of the concerns at the
breakup was to ensure that all the IXCs would be treated equally in terms of
line quality, tariffs, and the number of digits their customers would have to
dial to use them. The way this is handled is illustrated in Fig. 2-22. Here we see three example LATAs, each
with several end offices. LATAs 2 and 3 also have a small hierarchy with tandem
offices (intra-LATA toll offices).
Figure 2-22. The relationship of
LATAs, LECs, and IXCs. All the circles are LEC switching offices. Each hexagon
belongs to the IXC whose number is in it.
Any IXC that wishes to handle calls
originating in a LATA can build a switching office called a POP (Point of
Presence) there. The LEC is required to connect each IXC to every end office,
either directly, as in LATAs 1 and 3, or indirectly, as in LATA 2. Furthermore,
the terms of the connection, both technical and financial, must be identical
for all IXCs. In this way, a subscriber in, say, LATA 1, can choose which IXC
to use for calling subscribers in LATA 3.
As part of the MFJ, the IXCs were
forbidden to offer local telephone service and the LECs were forbidden to offer
inter-LATA telephone service, although both were free to enter any other
business, such as operating fried chicken restaurants. In 1984, that was a
fairly unambiguous statement. Unfortunately, technology has a funny way of
making the law obsolete. Neither cable television nor mobile phones were
covered by the agreement. As cable television went from one way to two way and
mobile phones exploded in popularity, both LECs and IXCs began buying up or
merging with cable and mobile operators.
By 1995, Congress saw that trying to
maintain a distinction between the various kinds of companies was no longer
tenable and drafted a bill to allow cable TV companies, local telephone
companies, long-distance carriers, and mobile operators to enter one another's
businesses. The idea was that any company could then offer its customers a
single integrated package containing cable TV, telephone, and information
services and that different companies would compete on service and price. The
bill was enacted into law in February 1996. As a result, some BOCs became IXCs
and some other companies, such as cable television operators, began offering
local telephone service in competition with the LECs.
One interesting property of the 1996
law is the requirement that LECs implement local number portability. This means
that a customer can change local telephone companies without having to get a
new telephone number. This provision removes a huge hurdle for many people and
makes them much more inclined to switch LECs, thus increasing competition. As a
result, the U.S. telecommunications landscape is currently undergoing a radical
restructuring. Again, many other countries are starting to follow suit. Often
other countries wait to see how this kind of experiment works out in the U.S.
If it works well, they do the same thing; if it works badly, they try something
else.
It is now time to start our detailed
study of how the telephone system works. The main parts of the system are
illustrated in Fig. 2-23. Here we see the local loops, the
trunks, and the toll offices and end offices, both of which contain switching
equipment that switches calls. An end office has up to 10,000 local loops (in
the U.S. and other large countries). In fact, until recently, the area code +
exchange indicated the end office, so (212) 601-xxxx was a specific end office
with 10,000 subscribers, numbered 0000 through 9999. With the advent of
competition for local service, this system was no longer tenable because
multiple companies wanted to own the end office code. Also, the number of codes
was basically used up, so complex mapping schemes had to be introduced.
Figure 2-23. The use of both analog
and digital transmission for a computer to computer call. Conversion is done by
the modems and codecs.
Let us begin with the part that most
people are familiar with: the two-wire local loop coming from a telephone
company end office into houses and small businesses. The local loop is also
frequently referred to as the ''last mile,'' although the length can be up to
several miles. It has used analog signaling for over 100 years and is likely to
continue doing so for some years to come, due to the high cost of converting to
digital. Nevertheless, even in this last bastion of analog transmission, change
is taking place. In this section we will study the traditional local loop and
the new developments taking place here, with particular emphasis on data
communication from home computers.
When a computer wishes to send
digital data over an analog dial-up line, the data must first be converted to
analog form for transmission over the local loop. This conversion is done by a
device called a modem, something we will study shortly. At the telephone
company end office the data are converted to digital form for transmission over
the long-haul trunks.
If the other end is a computer with
a modem, the reverse conversion—digital to analog—is needed to traverse the
local loop at the destination. This arrangement is shown in Fig. 2-23 for ISP 1 (Internet Service Provider),
which has a bank of modems, each connected to a different local loop. This ISP
can handle as many connections as it has modems (assuming its server or servers
have enough computing power). This arrangement was the normal one until 56-kbps
modems appeared, for reasons that will become apparent shortly.
Analog signaling consists of varying
a voltage with time to represent an information stream. If transmission media
were perfect, the receiver would receive exactly the same signal that the
transmitter sent. Unfortunately, media are not perfect, so the received signal
is not the same as the transmitted signal. For digital data, this difference
can lead to errors.
Transmission lines suffer from three
major problems: attenuation, delay distortion, and noise. Attenuation is the
loss of energy as the signal propagates outward. The loss is expressed in
decibels per kilometer. The amount of energy lost depends on the frequency. To
see the effect of this frequency dependence, imagine a signal not as a simple
waveform, but as a series of Fourier components. Each component is attenuated
by a different amount, which results in a different Fourier spectrum at the
receiver.
To make things worse, the different
Fourier components also propagate at different speeds in the wire. This speed difference
leads to distortion of the signal received at the other end.
Another problem is noise, which is
unwanted energy from sources other than the transmitter. Thermal noise is
caused by the random motion of the electrons in a wire and is unavoidable. Crosstalk
is caused by inductive coupling between two wires that are close to each other.
Sometimes when talking on the telephone, you can hear another conversation in
the background. That is crosstalk. Finally, there is impulse noise, caused by
spikes on the power line or other causes. For digital data, impulse noise can
wipe out one or more bits.
Due to the problems just discussed,
especially the fact that both attenuation and propagation speed are frequency
dependent, it is undesirable to have a wide range of frequencies in the signal.
Unfortunately, the square waves used in digital signals have a wide frequency
spectrum and thus are subject to strong attenuation and delay distortion. These
effects make baseband (DC) signaling unsuitable except at slow speeds and over
short distances.
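To make the wide spectrum of a square wave concrete, the following sketch (in Python, with an arbitrary 1000-Hz fundamental chosen purely for illustration) lists the first few Fourier components of an ideal square wave and the fraction of the signal's power each one carries. The amplitudes fall off only as 1/k, which is why so much of a digital signal's energy sits at frequencies that a real line attenuates and delays differently.

import math

# A square wave of frequency f contains only odd harmonics k*f, each with
# amplitude proportional to 1/k, so its energy is spread over a wide band.
f = 1000.0                         # fundamental frequency in Hz (illustrative value)
total = sum((4 / (math.pi * n)) ** 2 for n in range(1, 201, 2))   # ~total power

for k in (1, 3, 5, 7, 9, 11):
    amplitude = 4 / (math.pi * k)                  # Fourier coefficient of harmonic k
    fraction = amplitude ** 2 / total
    print(f"harmonic {k:2d}: {k * f:7.0f} Hz, relative amplitude {amplitude:.3f}, "
          f"~{100 * fraction:.1f}% of the power")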
To get around the problems
associated with DC signaling, especially on telephone lines, AC signaling is
used. A continuous tone in the 1000 to 2000-Hz range, called a sine wave
carrier, is introduced. Its amplitude, frequency, or phase can be modulated to
transmit information. In amplitude modulation, two different amplitudes are
used to represent 0 and 1, respectively. In frequency modulation, also known as
frequency shift keying, two (or more) different tones are used. (The term keying
is also widely used in the industry as a synonym for modulation.) In the
simplest form of phase modulation, the carrier wave is systematically shifted 0
or 180 degrees at uniformly spaced intervals. A better scheme is to use shifts
of 45, 135, 225, or 315 degrees to transmit 2 bits of information per time
interval. Also, always requiring a phase shift at the end of every time
interval makes it easier for the receiver to recognize the boundaries of
the time intervals.
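The sketch below illustrates the three basic methods in Python. The parameters are made up for the example (a 1500-Hz carrier, 300 baud, 8000 samples/sec); no modem standard prescribes these particular values.

import math

bits = [1, 0, 1]
fc = 1500.0            # carrier frequency in Hz (illustrative, in the 1000-2000 Hz range)
baud = 300             # symbols per second (illustrative)
fs = 8000              # samples per second used to draw the waveforms
samples_per_symbol = fs // baud

def waveform(kind):
    out = []
    for bit in bits:
        for n in range(samples_per_symbol):
            t = n / fs
            if kind == "ASK":                      # two amplitudes: 0 and 1
                out.append(bit * math.sin(2 * math.pi * fc * t))
            elif kind == "FSK":                    # two different tones for 0 and 1
                f = fc + (500 if bit else -500)
                out.append(math.sin(2 * math.pi * f * t))
            elif kind == "PSK":                    # 0 or 180 degree phase shift
                phase = 0.0 if bit else math.pi
                out.append(math.sin(2 * math.pi * fc * t + phase))
    return out

for kind in ("ASK", "FSK", "PSK"):
    print(kind, "first samples:", [round(x, 2) for x in waveform(kind)[:5]])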
Figure 2-24 illustrates the three forms of
modulation. In Fig. 2-24(b) one of the amplitudes is nonzero and
one is zero. In Fig. 2-24(c) two frequencies are used. In Fig. 2-24(d) a phase shift is either present or
absent at each bit boundary. A device that accepts a serial stream of bits as
input and produces a carrier modulated by one (or more) of these methods (or
vice versa) is called a modem (for modulator-demodulator). The modem is
inserted between the (digital) computer and the (analog) telephone system.
Figure 2-24. (a) A binary signal.
(b) Amplitude modulation. (c) Frequency modulation. (d) Phase modulation.
To go to higher and higher speeds,
it is not possible to just keep increasing the sampling rate. The Nyquist theorem
says that even with a perfect 3000-Hz line (which a dial-up telephone is
decidedly not), there is no point in sampling faster than 6000 Hz. In practice,
most modems sample 2400 times/sec and focus on getting more bits per sample.
The number of samples per second is
measured in baud. During each baud, one symbol is sent. Thus, an n-baud line
transmits n symbols/sec. For example, a 2400-baud line sends one symbol about
every 416.667 µsec. If the symbol consists of 0 volts for a logical 0 and 1
volt for a logical 1, the bit rate is 2400 bps. If, however, the voltages 0, 1,
2, and 3 volts are used, every symbol consists of 2 bits, so a 2400-baud line
can transmit 2400 symbols/sec at a data rate of 4800 bps. Similarly, with four
possible phase shifts, there are also 2 bits/symbol, so again here the bit rate
is twice the baud rate. The latter technique is widely used and called QPSK (Quadrature
Phase Shift Keying).
The concepts of bandwidth, baud,
symbol, and bit rate are commonly confused, so let us restate them here. The
bandwidth of a medium is the range of frequencies that pass through it with
minimum attenuation. It is a physical property of the medium (usually from 0 to
some maximum frequency) and measured in Hz. The baud rate is the number of
samples/sec made. Each sample sends one piece of information, that is, one
symbol. The baud rate and symbol rate are thus the same. The modulation
technique (e.g., QPSK) determines the number of bits/symbol. The bit rate is
the amount of information sent over the channel and is equal to the number of
symbols/sec times the number of bits/symbol.
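The relationship can be restated as a one-line formula: bit rate = baud rate x bits per symbol. The tiny fragment below just reproduces the 2400-baud examples given above.

def bit_rate(baud, bits_per_symbol):
    """Bit rate of a line sending `baud` symbols/sec, each carrying bits_per_symbol bits."""
    return baud * bits_per_symbol

print(bit_rate(2400, 1))   # two voltage levels        -> 2400 bps
print(bit_rate(2400, 2))   # four levels or QPSK       -> 4800 bps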
All advanced modems use a
combination of modulation techniques to transmit multiple bits per baud. Often
multiple amplitudes and multiple phase shifts are combined to transmit several
bits/symbol. In Fig. 2-25(a), we see dots at 45, 135, 225, and
315 degrees with constant amplitude (distance from the origin). The phase of a
dot is indicated by the angle a line from it to the origin makes with the
positive x-axis. Fig. 2-25(a) has four valid combinations and can
be used to transmit 2 bits per symbol. It is QPSK.
In Fig. 2-25(b) we see a different modulation
scheme, in which four amplitudes and four phases are used, for a total of 16
different combinations. This modulation scheme can be used to transmit 4 bits
per symbol. It is called QAM-16 (Quadrature Amplitude Modulation). Sometimes
the term 16-QAM is used instead. QAM-16 can be used, for example, to transmit
9600 bps over a 2400-baud line.
Figure 2-25(c) is yet another modulation scheme
involving amplitude and phase. It allows 64 different combinations, so 6 bits
can be transmitted per symbol. It is called QAM-64. Higher-order QAMs also are
used.
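These constellations are easy to generate programmatically. The sketch below builds the four QPSK points and then constructs QAM-16 and QAM-64 by combining several amplitudes with several phases, following the description of Fig. 2-25(b); real standards fine-tune the exact point placement, so this is only a schematic illustration that bits per symbol is log2 of the number of points.

import math

def qpsk_points():
    # Four points at 45, 135, 225, and 315 degrees, all with the same amplitude.
    return [(math.cos(math.radians(a)), math.sin(math.radians(a)))
            for a in (45, 135, 225, 315)]

def amplitude_phase_points(n_amplitudes, n_phases):
    # An idealized constellation built by combining amplitudes and phases,
    # as in the text's description of QAM-16 (four amplitudes, four phases).
    points = []
    for i in range(1, n_amplitudes + 1):
        for j in range(n_phases):
            angle = math.radians(45 + j * 360 / n_phases)
            points.append((i * math.cos(angle), i * math.sin(angle)))
    return points

for name, pts in [("QPSK", qpsk_points()),
                  ("QAM-16", amplitude_phase_points(4, 4)),
                  ("QAM-64", amplitude_phase_points(8, 8))]:
    print(f"{name}: {len(pts)} points, {int(math.log2(len(pts)))} bits/symbol")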
Diagrams such as those of Fig. 2-25, which show the legal combinations of
amplitude and phase, are called constellation diagrams. Each high-speed modem
standard has its own constellation pattern and can talk only to other modems
that use the same one (although most modems can emulate all the slower ones).
With many points in the
constellation pattern, even a small amount of noise in the detected amplitude
or phase can result in an error and, potentially, many bad bits. To reduce the
chance of an error, standards for the higher-speed modems do error correction
by adding extra bits to each sample. The schemes are known as TCM (Trellis
Coded Modulation). Thus, for example, the V.32 modem standard uses 32
constellation points to transmit 4 data bits and 1 parity bit per symbol at
2400 baud to achieve 9600 bps with error correction. Its constellation pattern
is shown in Fig. 2-26(a). The decision to ''rotate'' around
the origin by 45 degrees was made for engineering reasons; the rotated and
unrotated constellations have the same information capacity.
The next step above 9600 bps is
14,400 bps. It is called V.32 bis. This speed is achieved by transmitting 6
data bits and 1 parity bit per sample at 2400 baud. Its constellation pattern
has 128 points when QAM-128 is used and is shown in Fig. 2-26(b). Fax modems use this speed to
transmit pages that have been scanned in as bit maps. QAM-256 is not used in
any standard telephone modems, but it is used on cable networks, as we shall
see.
The next telephone modem after V.32
bis is V.34, which runs at 28,800 bps at 2400 baud with 12 data bits/symbol.
The final modem in this series is V.34 bis which uses 14 data bits/symbol at
2400 baud to achieve 33,600 bps.
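Since all of these standards signal at 2400 baud, each data rate is just the number of data bits per symbol times 2400. The fragment below simply tabulates the figures quoted above.

BAUD = 2400
standards = {          # data bits per symbol, as quoted in the text
    "V.32":     4,     # plus 1 parity bit (TCM), 32-point constellation
    "V.32 bis": 6,     # plus 1 parity bit, 128-point constellation
    "V.34":     12,
    "V.34 bis": 14,
}
for name, data_bits in standards.items():
    print(f"{name:9s}: {data_bits} data bits/symbol x {BAUD} baud = {data_bits * BAUD} bps")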
To increase the effective data rate
further, many modems compress the data before transmitting it, to get an
effective data rate higher than 33,600 bps. On the other hand, nearly all
modems test the line before starting to transmit user data, and if they find
the quality lacking, cut back to a speed lower than the rated maximum. Thus,
the effective modem speed observed by the user can be lower, equal to, or
higher than the official rating.
All modern modems allow traffic in
both directions at the same time (by using different frequencies for different
directions). A connection that allows traffic in both directions simultaneously
is called full duplex. A two-lane road is full duplex. A connection that allows
traffic either way, but only one way at a time is called half duplex. A single
railroad track is half duplex. A connection that allows traffic only one way is
called simplex. A one-way street is simplex. Another example of a simplex
connection is an optical fiber with a laser on one end and a light detector on
the other end.
The reason that standard modems stop
at 33,600 is that the Shannon limit for the telephone system is about 35 kbps,
so going faster than this would violate the laws of physics (department of
thermodynamics). To find out whether 56-kbps modems are theoretically possible,
stay tuned.
But why is the theoretical limit 35
kbps? It is determined by the average length of the local loops and the quality
of these lines. In Fig. 2-23, a call originating at the computer on
the left and terminating at ISP 1 goes over two local loops as an analog
signal, once at the source and once at the destination. Each of these adds
noise to the signal. If we could get rid of one of these local loops, the
maximum rate would be doubled.
ISP 2 does precisely that. It has a
pure digital feed from the nearest end office. The digital signal used on the
trunks is fed directly to ISP 2, eliminating the codecs, modems, and analog
transmission on its end. Thus, when one end of the connection is purely
digital, as it is with most ISPs now, the maximum data rate can be as high as
70 kbps. Between two home users with modems and analog lines, the maximum is
33.6 kbps.
The reason that 56 kbps modems are
in use has to do with the Nyquist theorem. The telephone channel is about 4000
Hz wide (including the guard bands). The maximum number of independent samples
per second is thus 8000. The number of bits per sample in the U.S. is 8, one of
which is used for control purposes, allowing 56,000 bit/sec of user data. In
Europe, all 8 bits are available to users, so 64,000-bit/sec modems could have
been used, but to get international agreement on a standard, 56,000 was chosen.
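Both limits can be checked with a little arithmetic. The sketch below assumes a 3000-Hz channel and a signal-to-noise ratio of roughly 35 dB for the Shannon calculation (real local loops vary, so this is only an order-of-magnitude check of the ~35-kbps figure), and then reproduces the Nyquist reasoning behind 56 kbps when one end is digital.

import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon limit: C = B * log2(1 + S/N)."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

# Assumed values: a ~3000-Hz voice channel with roughly 35 dB SNR over the
# analog local loops; the actual SNR of a real loop varies.
print(f"Shannon estimate: {shannon_capacity(3000, 35) / 1000:.1f} kbps")

# Nyquist reasoning for 56 kbps with one purely digital end:
samples_per_sec = 8000            # 2 x the ~4000-Hz channel width
bits_per_sample = 8
control_bits = 1                  # one bit per sample used for control in the U.S.
print(f"Downstream: {samples_per_sec * (bits_per_sample - control_bits)} bps")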
This modem standard is called V.90.
It provides for a 33.6-kbps upstream channel (user to ISP), but a 56 kbps
downstream channel (ISP to user) because there is usually more data transport from
the ISP to the user than the other way (e.g., requesting a Web page takes only
a few bytes, but the actual page could be megabytes). In theory, an upstream
channel wider than 33.6 kbps would have been possible, but since many local
loops are too noisy for even 33.6 kbps, it was decided to allocate more of the
bandwidth to the downstream channel to increase the chances of it actually
working at 56 kbps.
The next step beyond V.90 is V.92.
These modems are capable of 48 kbps on the upstream channel if the line can
handle it. They also determine the appropriate speed to use in about half of
the usual 30 seconds required by older modems. Finally, they allow an incoming
telephone call to interrupt an Internet session, provided that the line has
call waiting service.
When the telephone industry finally
got to 56 kbps, it patted itself on the back for a job well done. Meanwhile,
the cable TV industry was offering speeds up to 10 Mbps on shared cables, and
satellite companies were planning to offer upward of 50 Mbps. As Internet
access became an increasingly important part of their business, the telephone
companies (LECs) began to realize they needed a more competitive product. Their
answer was to start offering new digital services over the local loop. Services
with more bandwidth than standard telephone service are sometimes called broadband,
although the term really is more of a marketing concept than a specific
technical concept.
Initially, there were many
overlapping offerings, all under the general name of xDSL (Digital Subscriber
Line), for various x. Below we will discuss these but primarily focus on what
is probably going to become the most popular of these services, ADSL (Asymmetric
DSL). Since ADSL is still being developed and not all the standards are fully
in place, some of the details given below may change in time, but the basic
picture should remain valid. For more information about ADSL, see (Summers,
1999; and Vetter et al., 2000).
The reason that modems are so slow
is that telephones were invented for carrying the human voice and the entire
system has been carefully optimized for this purpose. Data have always been
stepchildren. At the point where each local loop terminates in the end office,
the wire runs through a filter that attenuates all frequencies below 300 Hz and
above 3400 Hz. The cutoff is not sharp—300 Hz and 3400 Hz are the 3 dB
points—so the bandwidth is usually quoted as 4000 Hz even though the distance
between the 3 dB points is 3100 Hz. Data are thus also restricted to this
narrow band.
The trick that makes xDSL work is
that when a customer subscribes to it, the incoming line is connected to a
different kind of switch, one that does not have this filter, thus making the
entire capacity of the local loop available. The limiting factor then becomes
the physics of the local loop, not the artificial 3100 Hz bandwidth created by
the filter.
Unfortunately, the capacity of the
local loop depends on several factors, including its length, thickness, and
general quality. A plot of the potential bandwidth as a function of distance is
given in Fig. 2-27. This figure assumes that all the other
factors are optimal (new wires, modest bundles, etc.).
The implication of this figure
creates a problem for the telephone company. When it picks a speed to offer, it
is simultaneously picking a radius from its end offices beyond which the
service cannot be offered. This means that when distant customers try to sign
up for the service, they may be told ''Thanks a lot for your interest, but you
live 100 meters too far from the nearest end office to get the service. Could
you please move?'' The lower the chosen speed, the larger the radius and the
more customers covered. But the lower the speed, the less attractive the
service and the fewer the people who will be willing to pay for it. This is
where business meets technology. (One potential solution is building mini end
offices out in the neighborhoods, but that is an expensive proposition.)
The xDSL services have all been
designed with certain goals in mind. First, the services must work over the
existing category 3 twisted pair local loops. Second, they must not affect
customers' existing telephones and fax machines. Third, they must be much
faster than 56 kbps. Fourth, they should be always on, with just a monthly
charge but no per-minute charge.
The initial ADSL offering was from
AT&T and worked by dividing the spectrum available on the local loop, which
is about 1.1 MHz, into three frequency bands: POTS (Plain Old Telephone Service),
upstream (user to end office), and downstream (end office to user). The
technique of having multiple frequency bands is called frequency division
multiplexing; we will study it in detail in a later section. Subsequent
offerings from other providers have taken a different approach, and it appears
this one is likely to win out, so we will describe it below.
The alternative approach, called DMT
(Discrete MultiTone), is illustrated in Fig. 2-28. In effect, what it does is divide the
available 1.1 MHz spectrum on the local loop into 256 independent channels of
4312.5 Hz each. Channel 0 is used for POTS. Channels 1–5 are not used, to keep
the voice signal and data signals from interfering with each other. Of the
remaining 250 channels, one is used for upstream control and one is used for
downstream control. The rest are available for user data.
In principle, each of the remaining
channels can be used for a full-duplex data stream, but harmonics, crosstalk,
and other effects keep practical systems well below the theoretical limit. It
is up to the provider to determine how many channels are used for upstream and
how many for downstream. A 50–50 mix of upstream and downstream is technically
possible, but most providers allocate something like 80%–90% of the bandwidth
to the downstream channel since most users download more data than they upload.
This choice gives rise to the ''A'' in ADSL. A common split is 32 channels for
upstream and the rest downstream. It is also possible to have a few of the
highest upstream channels be bidirectional for increased bandwidth, although
making this optimization requires adding a special circuit to cancel echoes.
The ADSL standard (ANSI T1.413 and
ITU G.992.1) allows speeds of as much as 8 Mbps downstream and 1 Mbps upstream.
However, few providers offer this speed. Typically, providers offer 512 kbps
downstream and 64 kbps upstream (standard service) and 1 Mbps downstream and
256 kbps upstream (premium service).
Within each channel, a modulation
scheme similar to V.34 is used, although the sampling rate is 4000 baud instead
of 2400 baud. The line quality in each channel is constantly monitored and the
data rate adjusted continuously as needed, so different channels may have
different data rates. The actual data are sent with QAM modulation, with up to
15 bits per baud, using a constellation diagram analogous to that of Fig. 2-25(b). With, for example, 224 downstream
channels and 15 bits/baud at 4000 baud, the downstream bandwidth is 13.44 Mbps.
In practice, the signal-to-noise ratio is never good enough to achieve this
rate, but 8 Mbps is possible on short runs over high-quality loops, which is
why the standard goes up this far.
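The DMT arithmetic is easy to reproduce. The following fragment uses the figures quoted above (256 channels in roughly 1.1 MHz, a 224-channel downstream split, 15 bits/baud at 4000 baud); the 224/32 split is one common provider choice, not a fixed part of the standard.

total_spectrum_hz = 1_100_000        # roughly 1.1 MHz usable on the local loop
num_channels = 256
print(f"channel width: {total_spectrum_hz / num_channels:.1f} Hz")   # ~4297 Hz; the standard uses 4312.5 Hz

downstream_channels = 224            # a common downstream allocation
bits_per_baud = 15                   # best case per channel
baud = 4000
print(f"theoretical downstream ceiling: "
      f"{downstream_channels * bits_per_baud * baud / 1e6:.2f} Mbps")   # 13.44 Mbps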
A typical ADSL arrangement is shown
in Fig. 2-29. In this scheme, a telephone company
technician must install a NID (Network Interface Device) on the customer's
premises. This small plastic box marks the end of the telephone company's
property and the start of the customer's property. Close to the NID (or
sometimes combined with it) is a splitter, an analog filter that separates the
0-4000 Hz band used by POTS from the data. The POTS signal is routed to the
existing telephone or fax machine, and the data signal is routed to an ADSL
modem. The ADSL modem is actually a digital signal processor that has been set
up to act as 250 QAM modems operating in parallel at different frequencies.
Since most current ADSL modems are external, the computer must be connected to
the modem at high speed. Usually, this is done by putting an Ethernet card in the
computer and operating a very short two-node Ethernet containing only the
computer and ADSL modem. Occasionally the USB port is used instead of Ethernet.
In the future, internal ADSL modem cards will no doubt become available.
At the other end of the wire, on the
end office side, a corresponding splitter is installed. Here the voice portion
of the signal is filtered out and sent to the normal voice switch. The signal
above 26 kHz is routed to a new kind of device called a DSLAM (Digital
Subscriber Line Access Multiplexer), which contains the same kind of digital
signal processor as the ADSL modem. Once the digital signal has been recovered
into a bit stream, packets are formed and sent off to the ISP.
This complete separation between the
voice system and ADSL makes it relatively easy for a telephone company to
deploy ADSL. All that is needed is buying a DSLAM and splitter and attaching
the ADSL subscribers to the splitter. Other high-bandwidth services (e.g.,
ISDN) require much greater changes to the existing switching equipment.
One disadvantage of the design of Fig. 2-29 is the presence of the NID and splitter
on the customer premises. Installing these can only be done by a telephone
company technician, necessitating an expensive ''truck roll'' (i.e., sending a
technician to the customer's premises). Therefore, an alternative splitterless
design has also been standardized. It is informally called G.lite but the ITU
standard number is G.992.2. It is the same as Fig. 2-29 but without the splitter. The existing
telephone line is used as is. The only difference is that a microfilter has to
be inserted into each telephone jack between the telephone or ADSL modem and
the wire. The microfilter for the telephone is a low-pass filter eliminating
frequencies above 3400 Hz; the microfilter for the ADSL modem is a high-pass
filter eliminating frequencies below 26 kHz. However, this system is not as
reliable as having a splitter, so G.lite can be used only up to 1.5 Mbps
(versus 8 Mbps for ADSL with a splitter). G.lite still requires a splitter in
the end office, but that installation does not require thousands of
truck rolls.
ADSL is just a physical layer
standard. What runs on top of it depends on the carrier. Often the choice is
ATM due to ATM's ability to manage quality of service and the fact that many
telephone companies run ATM in the core network.
Since 1996 in the U.S. and a bit
later in other countries, companies that wish to compete with the entrenched
local telephone company (the former monopolist), called an ILEC (Incumbent LEC),
are free to do so. The most likely candidates are long-distance telephone
companies (IXCs). Any IXC wishing to get into the local phone business in some
city must do the following things. First, it must buy or lease a building for
its first end office in that city. Second, it must fill the end office with
telephone switches and other equipment, all of which are available as
off-the-shelf products from various vendors. Third, it must run a fiber between
the end office and its nearest toll office so the new local customers will have
access to its national network. Fourth, it must acquire customers, typically by
advertising better service or lower prices than those of the ILEC.
Then the hard part begins. Suppose
that some customers actually show up. How is the new local phone company,
called a CLEC (Competitive LEC), going to connect customer telephones and
computers to its shiny new end office? Buying the necessary rights of way and
stringing wires or fibers is prohibitively expensive. Many CLECs have
discovered a cheaper alternative to the traditional twisted-pair local loop:
the WLL (Wireless Local Loop).
In a certain sense, a fixed
telephone using a wireless local loop is a bit like a mobile phone, but there
are three crucial technical differences. First, the wireless local loop customer
often wants high-speed Internet connectivity, frequently at speeds at least equal to
ADSL. Second, the new customer probably does not mind having a CLEC technician
install a large directional antenna on his roof pointed at the CLEC's end
office. Third, the wireless local loop customer does not move around, which
eliminates the handoff problems of mobile telephony. And thus a new industry is
born: fixed wireless (local telephone and Internet service run by CLECs over
wireless local loops).
Although WLLs began serious
operation in 1998, we first have to go back to 1969 to see the origin. In that
year the FCC allocated two television channels (at 6 MHz each) for
instructional television at 2.1 GHz. In subsequent years, 31 more channels were
added at 2.5 GHz for a total of 198 MHz.
Instructional television never took
off and in 1998, the FCC took the frequencies back and allocated them to
two-way radio. They were immediately seized upon for wireless local loops. At
these frequencies, the microwaves are 10–12 cm long. They have a range of about
50 km and can penetrate vegetation and rain moderately well. The 198 MHz of new
spectrum was immediately put to use for wireless local loops as a service
called MMDS (Multichannel Multipoint Distribution Service). MMDS can be
regarded as a MAN (Metropolitan Area Network), as can its cousin LMDS
(discussed below).
The big advantage of this service is
that the technology is well established and the equipment is readily available.
The disadvantage is that the total bandwidth available is modest and must be
shared by many users over a fairly large geographic area.
The low bandwidth of MMDS led to
interest in millimeter waves as an alternative. At frequencies of 28–31 GHz in
the U.S. and 40 GHz in Europe, no frequencies were allocated because it is
difficult to build silicon integrated circuits that operate so fast. That
problem was solved with the invention of gallium arsenide integrated circuits,
opening up millimeter bands for radio communication. The FCC responded to the
demand by allocating 1.3 GHz to a new wireless local loop service called LMDS (Local
Multipoint Distribution Service). This allocation is the single largest chunk
of bandwidth ever allocated by the FCC for any one use. A similar chunk is
being allocated in Europe, but at 40 GHz.
The operation of LMDS is shown in Fig. 2-30. Here a tower is shown with multiple
antennas on it, each pointing in a different direction. Since millimeter waves
are highly directional, each antenna defines a sector, independent of the other
ones. At this frequency, the range is 2–5 km, which means that many towers are
needed to cover a city.
Like ADSL, LMDS uses an asymmetric
bandwidth allocation favoring the downstream channel. With current technology,
each sector can have 36 Mbps downstream and 1 Mbps upstream, shared among all
the users in that sector. If each active user downloads three 5-KB pages per
minute, the user is occupying an average of 2000 bps of spectrum, which allows
a maximum of 18,000 active users per sector. To keep the delay reasonable, no
more than 9000 active users should be supported, though. With four sectors, as
shown in Fig. 2-30, an active user population of 36,000
could be supported. Assuming that one in three customers is on line during peak
periods, a single tower with four antennas could serve 100,000 people within a
5-km radius of the tower. These calculations have been done by many potential
CLECs, some of whom have concluded that for a modest investment in
millimeter-wave towers, they can get into the local telephone and Internet
business and offer users data rates comparable to cable TV and at a lower
price.
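The same capacity estimate can be redone in a few lines. All the inputs below come from the discussion above, with 1 KB taken as 1000 bytes; the result of roughly 108,000 subscribers is what the text rounds to 100,000.

sector_downstream_bps = 36_000_000      # 36 Mbps per sector
pages_per_minute = 3
bytes_per_page = 5_000                  # 5 KB, taking 1 KB = 1000 bytes
bps_per_user = pages_per_minute * bytes_per_page * 8 / 60
print(f"per-user load: {bps_per_user:.0f} bps")                        # 2000 bps

max_active = sector_downstream_bps / bps_per_user                      # 18,000 (saturation)
active_per_sector = max_active / 2                                     # 9000, to keep delay reasonable
sectors = 4
online_fraction = 1 / 3                                                # one in three on line at peak
subscribers = active_per_sector * sectors / online_fraction
print(f"subscribers per 4-sector tower: {subscribers:.0f}")            # ~108,000 (rounded to 100,000)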
LMDS has a few problems, however.
For one thing, millimeter waves propagate in straight lines, so there must be a
clear line of sight between the roof top antennas and the tower. For another,
leaves absorb these waves well, so the tower must be high enough to avoid
having trees in the line of sight. And what may have looked like a clear line
of sight in December may not be clear in July when the trees are full of
leaves. Rain also absorbs these waves. To some extent, errors introduced by
rain can be compensated for with error correcting codes or turning up the power
when it is raining. Nevertheless, LMDS service is more likely to be rolled out
first in dry climates, say, in Arizona rather than in Seattle.
Wireless local loops are not likely
to catch on unless there are standards, to encourage equipment vendors to
produce products and to ensure that customers can change CLECs without having
to buy new equipment. To provide this standardization, IEEE set up a committee
called 802.16 to draw up a standard for LMDS. The 802.16 standard was published
in April 2002. IEEE calls 802.16 a wireless MAN.
IEEE 802.16 was designed for digital
telephony, Internet access, connection of two remote LANs, television and radio
broadcasting, and other uses.
Economies of scale play an important
role in the telephone system. It costs essentially the same amount of money to
install and maintain a high-bandwidth trunk as a low-bandwidth trunk between
two switching offices (i.e., the costs come from having to dig the trench and
not from the copper wire or optical fiber). Consequently, telephone companies
have developed elaborate schemes for multiplexing many conversations over a
single physical trunk. These multiplexing schemes can be divided into two basic
categories: FDM (Frequency Division Multiplexing) and TDM (Time Division
Multiplexing). In FDM, the frequency spectrum is divided into frequency bands,
with each user having exclusive possession of some band. In TDM, the users take
turns (in a round-robin fashion), each one periodically getting the entire
bandwidth for a little burst of time.
AM radio broadcasting provides
illustrations of both kinds of multiplexing. The allocated spectrum is about 1
MHz, roughly 500 to 1500 kHz. Different frequencies are allocated to different
logical channels (stations), each operating in a portion of the spectrum, with
the interchannel separation great enough to prevent interference. This system
is an example of frequency division multiplexing. In addition (in some
countries), the individual stations have two logical subchannels: music and
advertising. These two alternate in time on the same frequency, first a burst
of music, then a burst of advertising, then more music, and so on. This
situation is time division multiplexing.
Below we will examine frequency
division multiplexing. After that we will see how FDM can be applied to fiber
optics (wavelength division multiplexing). Then we will turn to TDM, and end
with an advanced TDM system used for fiber optics (SONET).
Figure 2-31 shows how three voice-grade telephone
channels are multiplexed using FDM. Filters limit the usable bandwidth to about
3100 Hz per voice-grade channel. When many channels are multiplexed together,
4000 Hz is allocated to each channel to keep them well separated. First the
voice channels are raised in frequency, each by a different amount. Then they
can be combined because no two channels now occupy the same portion of the
spectrum. Notice that even though there are gaps (guard bands) between the
channels, there is some overlap between adjacent channels because the filters
do not have sharp edges. This overlap means that a strong spike at the edge of
one channel will be felt in the adjacent one as nonthermal noise.
Figure 2-31. Frequency division
multiplexing. (a) The original bandwidths. (b) The bandwidths raised in
frequency. (c) The multiplexed channel.
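A small numerical sketch of the idea follows, in plain Python with arbitrary tone frequencies and slot positions. Each stand-in voice channel is shifted up in frequency by multiplying it by its own carrier, and the shifted channels are simply added to form the multiplexed signal, as in Fig. 2-31.

import math

fs = 64_000                      # sample rate for the sketch, Hz
n = int(fs * 0.01)               # 10 ms of signal

def voice(freq):
    """A stand-in for one voice-band signal: a single tone below 4 kHz."""
    return [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]

channels = [voice(f) for f in (500.0, 900.0, 1300.0)]
carrier_offsets = [12_000.0, 16_000.0, 20_000.0]     # illustrative slot positions, Hz

multiplexed = [0.0] * n
for samples, fo in zip(channels, carrier_offsets):
    for i, s in enumerate(samples):
        # Multiplying by a carrier moves the channel's spectrum up to around fo.
        multiplexed[i] += s * math.cos(2 * math.pi * fo * i / fs)

print("first multiplexed samples:", [round(x, 3) for x in multiplexed[:5]])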
The FDM schemes used around the
world are to some degree standardized. A widespread standard is twelve 4000-Hz
voice channels multiplexed into the 60 to 108 kHz band. This unit is called a group.
The 12-kHz to 60-kHz band is sometimes used for another group. Many carriers
offer a 48- to 56-kbps leased line service to customers, based on the group.
Five groups (60 voice channels) can be multiplexed to form a supergroup. The
next unit is the mastergroup, which is five supergroups (CCITT standard) or ten
supergroups (Bell system). Other standards of up to 230,000 voice channels also
exist.
For fiber optic channels, a
variation of frequency division multiplexing is used. It is called WDM (Wavelength
Division Multiplexing). The basic principle of WDM on fibers is depicted in Fig. 2-32. Here four fibers come together at an
optical combiner, each with its energy present at a different wavelength. The
four beams are combined onto a single shared fiber for transmission to a
distant destination. At the far end, the beam is split up over as many fibers
as there were on the input side. Each output fiber contains a short,
specially-constructed core that filters out all but one wavelength. The
resulting signals can be routed to their destination or recombined in different
ways for additional multiplexed transport.
There is really nothing new here.
This is just frequency division multiplexing at very high frequencies. As long
as each channel has its own frequency (i.e., wavelength) range and all the
ranges are disjoint, they can be multiplexed together on the long-haul fiber.
The only difference with electrical FDM is that an optical system using a
diffraction grating is completely passive and thus highly reliable.
WDM technology has been progressing
at a rate that puts computer technology to shame. WDM was invented around 1990.
The first commercial systems had eight channels of 2.5 Gbps per channel. By
1998, systems with 40 channels of 2.5 Gbps were on the market. By 2001, there
were products with 96 channels of 10 Gbps, for a total of 960 Gbps. This is
enough bandwidth to transmit 30 full-length movies per second (in MPEG-2).
Systems with 200 channels are already working in the laboratory. When the
number of channels is very large and the wavelengths are spaced close together,
for example, 0.1 nm, the system is often referred to as DWDM (Dense WDM).
It should be noted that the reason
WDM is popular is that the energy on a single fiber is typically only a few
gigahertz wide because it is currently impossible to convert between electrical
and optical media any faster. By running many channels in parallel on different
wavelengths, the aggregate bandwidth is increased linearly with the number of
channels. Since the bandwidth of a single fiber band is about 25,000 GHz (see Fig. 2-6), there is theoretically room for 2500
10-Gbps channels even at 1 bit/Hz (and higher rates are also possible).
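The aggregate capacities quoted above follow directly from channel count times per-channel rate, as the fragment below shows.

def aggregate_gbps(channels, gbps_per_channel):
    return channels * gbps_per_channel

print(aggregate_gbps(8, 2.5))      # first commercial systems:  20 Gbps
print(aggregate_gbps(40, 2.5))     # 1998 systems:             100 Gbps
print(aggregate_gbps(96, 10))      # 2001 systems:             960 Gbps

fiber_band_ghz = 25_000            # usable bandwidth of one fiber band
per_channel_ghz = 10               # 10 Gbps at 1 bit/Hz
print(fiber_band_ghz // per_channel_ghz, "channels of 10 Gbps fit in principle")   # 2500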
Another new development is all-optical
amplifiers. Previously, every 100 km it was necessary to split up all
the channels and convert each one to an electrical signal for amplification
separately before reconverting to optical and combining them. Nowadays, all-optical
amplifiers can regenerate the entire signal once every 1000 km without
the need for multiple opto-electrical conversions.
In the example of Fig. 2-32, we have a fixed wavelength system.
Bits from input fiber 1 go to output fiber 3, bits from input fiber 2 go to
output fiber 1, etc. However, it is also possible to build WDM systems that are
switched. In such a device, the output filters are tunable using Fabry-Perot or
Mach-Zehnder interferometers. For more information about WDM and its
application to Internet packet switching, see (Elmirghani and Mouftah, 2000;
Hunter and Andonovic, 2000; and Listani et al., 2001).
WDM technology is wonderful, but
there is still a lot of copper wire in the telephone system, so let us turn
back to it for a while. Although FDM is still used over copper wires or
microwave channels, it requires analog circuitry and is not amenable to being
done by a computer. In contrast, TDM can be handled entirely by digital
electronics, so it has become far more widespread in recent years.
Unfortunately, it can only be used for digital data. Since the local loops
produce analog signals, a conversion is needed from analog to digital in the
end office, where all the individual local loops come together to be combined
onto outgoing trunks.
We will now look at how multiple
analog voice signals are digitized and combined onto a single outgoing digital
trunk. Computer data sent over a modem are also analog, so the following
description also applies to them. The analog signals are digitized in the end
office by a device called a codec (coder-decoder), producing a series of 8-bit
numbers. The codec makes 8000 samples per second (125 µsec/sample) because the
Nyquist theorem says that this is sufficient to capture all the information
from the 4-kHz telephone channel bandwidth. At a lower sampling rate,
information would be lost; at a higher one, no extra information would be
gained. This technique is called PCM (Pulse Code Modulation). PCM forms the heart
of the modern telephone system. As a consequence, virtually all time intervals
within the telephone system are multiples of 125 µsec.
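A minimal PCM encoder can be sketched in a few lines. The version below applies a simple linear 8-bit quantizer to a synthetic test tone; real telephone codecs use companded (mu-law or A-law) quantization, so this is only a schematic illustration of sampling at 8000 Hz and producing one 8-bit number every 125 microseconds.

import math

SAMPLE_RATE = 8000                  # samples/sec, i.e. one sample every 125 microseconds
LEVELS = 2 ** 8                     # 256 quantization levels (8 bits)

def pcm_encode(analog, seconds):
    """Sample the function analog(t) and quantize linearly to 8-bit values 0..255."""
    samples = []
    for i in range(int(SAMPLE_RATE * seconds)):
        t = i / SAMPLE_RATE
        x = max(-1.0, min(1.0, analog(t)))            # clip to the codec's input range
        samples.append(int((x + 1) / 2 * (LEVELS - 1)))
    return samples

tone = lambda t: 0.5 * math.sin(2 * math.pi * 440 * t)   # a 440-Hz test tone
print(pcm_encode(tone, 0.001))                           # 1 ms of speech -> 8 codes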
When digital transmission began
emerging as a feasible technology, CCITT was unable to reach agreement on an
international standard for PCM. Consequently, a variety of incompatible schemes
are now in use in different countries around the world.
The method used in North America and
Japan is the T1 carrier, depicted in Fig. 2-33. (Technically speaking, the format is
called DS1 and the carrier is called T1, but following widespread industry
tradition, we will not make that subtle distinction here.) The T1 carrier
consists of 24 voice channels multiplexed together. Usually, the analog signals
are sampled on a round-robin basis with the resulting analog stream being fed
to the codec rather than having 24 separate codecs and then merging the digital
output. Each of the 24 channels, in turn, gets to insert 8 bits into the output
stream. Seven bits are data and one is for control, yielding 7 x 8000 = 56,000
bps of data, and 1 x 8000 = 8000 bps of signaling information per channel.
A frame consists of 24 x 8 = 192
bits plus one extra bit for framing, yielding 193 bits every 125 µsec. This
gives a gross data rate of 1.544 Mbps. The 193rd bit is used for frame
synchronization. It takes on the pattern 0101010101 . . . . Normally, the
receiver keeps checking this bit to make sure that it has not lost synchronization.
If it does get out of sync, the receiver can scan for this pattern to get
resynchronized. Analog customers cannot generate the bit pattern at all because
it corresponds to a sine wave at 4000 Hz, which would be filtered out. Digital
customers can, of course, generate this pattern, but the odds are against its
being present when the frame slips. When a T1 system is being used entirely for
data, only 23 of the channels are used for data. The 24th one is used for a
special synchronization pattern, to allow faster recovery in the event that the
frame slips.
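The T1 arithmetic can be checked directly, using only the figures given above.

channels = 24
bits_per_channel = 8                 # 7 data bits + 1 control bit per sample
framing_bits = 1
frame_bits = channels * bits_per_channel + framing_bits       # 193 bits
frames_per_sec = 8000                                         # one frame every 125 microseconds
print(frame_bits, "bits/frame")
print(frame_bits * frames_per_sec / 1e6, "Mbps gross")        # 1.544 Mbps
print(7 * frames_per_sec, "bps of user data per channel")     # 56,000 bps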
When CCITT finally did reach
agreement, they felt that 8000 bps of signaling information was far too much,
so its 1.544-Mbps standard is based on an 8- rather than a 7-bit data item;
that is, the analog signal is quantized into 256 rather than 128 discrete
levels. Two (incompatible) variations are provided. In common-channel signaling,
the extra bit (which is attached onto the rear rather than the front of the
193-bit frame) takes on the values 10101010 . . . in the odd frames and
contains signaling information for all the channels in the even frames.
In the other variation, channel-associated
signaling, each channel has its own private signaling subchannel. A private
subchannel is arranged by allocating one of the eight user bits in every sixth
frame for signaling purposes, so five out of six samples are 8 bits wide, and
the other one is only 7 bits wide. CCITT also recommended a PCM carrier at
2.048 Mbps called E1. This carrier has 32 8-bit data samples packed into the
basic 125-µsec frame. Thirty of the channels are used for information and two
are used for signaling. Each group of four frames provides 64 signaling bits,
half of which are used for channel-associated signaling and half of which are
used for frame synchronization or are reserved for each country to use as it
wishes. Outside North America and Japan, the 2.048-Mbps E1 carrier is used
instead of T1.
Once the voice signal has been
digitized, it is tempting to try to use statistical techniques to reduce the
number of bits needed per channel. These techniques are appropriate not only
for encoding speech, but for the digitization of any analog signal. All of the
compaction methods are based on the principle that the signal changes relatively
slowly compared to the sampling frequency, so that much of the information in
the 7- or 8-bit digital level is redundant.
One method, called differential
pulse code modulation, consists of outputting not the digitized amplitude, but
the difference between the current value and the previous one. Since jumps of
±16 or more on a scale of 128 are unlikely, 5 bits should suffice instead of 7.
If the signal does occasionally jump wildly, the encoding logic may require
several sampling periods to ''catch up.'' For speech, the error introduced can
be ignored.
A variation of this compaction
method requires each sampled value to differ from its predecessor by either +1
or -1. Under these conditions, a single bit can be transmitted, telling whether
the new sample is above or below the previous one. This technique, called delta
modulation, is illustrated in Fig. 2-34. Like all compaction techniques that
assume small level changes between consecutive samples, delta encoding can get
into trouble if the signal changes too fast, as shown in the figure. When this
happens, information is lost.
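In the same spirit, a one-bit delta modulator can be sketched as follows; the step size of 1 and the input values are arbitrary choices for illustration.

# Delta modulation sketch: one bit per sample, approximating the signal in +/-1 steps.
def delta_encode(samples, step=1):
    approx, bits = 0, []
    for s in samples:
        bit = 1 if s > approx else 0        # 1 means "step up," 0 means "step down"
        approx += step if bit else -step
        bits.append(bit)
    return bits

# A rapidly rising input outruns the staircase (slope overload), so information is lost.
print(delta_encode([0, 1, 2, 10, 20, 30]))  # [0, 1, 1, 1, 1, 1]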
An improvement to differential PCM
is to extrapolate the previous few values to predict the next value and then to
encode the difference between the actual signal and the predicted one. The
transmitter and receiver must use the same prediction algorithm, of course.
Such schemes are called predictive encoding. They are useful because they
reduce the size of the numbers to be encoded, hence the number of bits to be
sent.
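A small predictive-encoding sketch follows; the linear-extrapolation predictor used here is just one assumed choice, and transmitter and receiver would have to agree on it.

# Predictive encoding sketch: transmit only the difference from a predicted value.
def predict(history):
    if len(history) < 2:
        return history[-1] if history else 0
    return 2 * history[-1] - history[-2]          # extrapolate from the last two samples

def predictive_encode(samples):
    history, residuals = [], []
    for s in samples:
        residuals.append(s - predict(history))    # small residuals need fewer bits
        history.append(s)
    return residuals

print(predictive_encode([10, 12, 14, 17, 19]))    # [10, 2, 0, 1, -1]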
Time division multiplexing allows
multiple T1 carriers to be multiplexed into higher-order carriers. Figure 2-35 shows how this can be done. At the
left we see four T1 channels being multiplexed onto one T2 channel. The
multiplexing at T2 and above is done bit for bit, rather than byte for byte as
it is with the 24 voice channels that make up a T1 frame. Four T1 streams at 1.544
Mbps should generate 6.176 Mbps, but T2 is actually 6.312 Mbps. The extra bits
are used for framing and recovery in case the carrier slips. T1 and T3 are
widely used by customers, whereas T2 and T4 are only used within the telephone
system itself, so they are not well known.
At the next level, seven T2 streams
are combined bitwise to form a T3 stream. Then six T3 streams are joined to
form a T4 stream. At each step a small amount of overhead is added for framing
and recovery in case the synchronization between sender and receiver is lost.
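The rates of the higher-order carriers can be reproduced by adding the per-step overhead; the overhead figures in this sketch are simply the differences implied by the standard rates, included only to make the arithmetic concrete.

# North American digital hierarchy, rates in Mbps.
T1 = 1.544
T2 = 4 * T1 + 0.136      # 6.176 + framing/recovery overhead = 6.312
T3 = 7 * T2 + 0.552      # 44.184 + overhead = 44.736
T4 = 6 * T3 + 5.760      # 268.416 + overhead = 274.176
print(T1, T2, T3, T4)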
Just as there is little agreement on
the basic carrier between the United States and the rest of the world, there is
equally little agreement on how it is to be multiplexed into higher-bandwidth
carriers. The U.S. scheme of stepping up by 4, 7, and 6 did not strike everyone
else as the way to go, so the CCITT standard calls for multiplexing four
streams onto one stream at each level. Also, the framing and recovery data are
different between the U.S. and CCITT standards. The CCITT hierarchy for 32,
128, 512, 2048, and 8192 channels runs at speeds of 2.048, 8.448, 34.368,
139.264, and 565.148 Mbps.
In the early days of fiber optics,
every telephone company had its own proprietary optical TDM system. After
AT&T was broken up in 1984, local telephone companies had to connect to
multiple long-distance carriers, all with different optical TDM systems, so the
need for standardization became obvious. In 1985, Bellcore, the RBOCs' research
arm, began working on a standard, called SONET (Synchronous Optical NETwork).
Later, CCITT joined the effort, which resulted in a SONET standard and a set of
parallel CCITT recommendations (G.707, G.708, and G.709) in 1989. The CCITT
recommendations are called SDH (Synchronous Digital Hierarchy) but differ from
SONET only in minor ways. Virtually all the long-distance telephone traffic in
the United States, and much of it elsewhere, now uses trunks running SONET in
the physical layer. For additional information about SONET, see (Bellamy, 2000;
Goralski, 2000; and Shepard, 2001).
The SONET design had four major
goals. First and foremost, SONET had to make it possible for different carriers
to interwork. Achieving this goal required defining a common signaling standard
with respect to wavelength, timing, framing structure, and other issues.
Second, some means was needed to
unify the U.S., European, and Japanese digital systems, all of which were based
on 64-kbps PCM channels, but all of which combined them in different (and
incompatible) ways.
Third, SONET had to provide a way to
multiplex multiple digital channels. At the time SONET was devised, the
highest-speed digital carrier actually used widely in the United States was T3,
at 44.736 Mbps. T4 was defined, but not used much, and nothing was even defined
above T4 speed. Part of SONET's mission was to continue the hierarchy to
gigabits/sec and beyond. A standard way to multiplex slower channels into one
SONET channel was also needed.
Fourth, SONET had to provide support
for operations, administration, and maintenance (OAM). Previous systems did not
do this very well.
An early decision was to make SONET
a traditional TDM system, with the entire bandwidth of the fiber devoted to one
channel containing time slots for the various subchannels. As such, SONET is a
synchronous system. It is controlled by a master clock with an accuracy of
about 1 part in 10^9. Bits on a SONET line are sent out at extremely
precise intervals, controlled by the master clock. When cell switching was
later proposed to be the basis of ATM, the fact that it permitted irregular
cell arrivals got it labeled as Asynchronous Transfer Mode to contrast it to
the synchronous operation of SONET. With SONET, the sender and receiver are
tied to a common clock; with ATM they are not.
The basic SONET frame is a block of
810 bytes put out every 125 µsec. Since SONET is synchronous, frames are
emitted whether or not there are any useful data to send. Having 8000
frames/sec exactly matches the sampling rate of the PCM channels used in all
digital telephony systems.
The 810-byte SONET frames are best
described as a rectangle of bytes, 90 columns wide by 9 rows high. Thus, 8 x
810 = 6480 bits are transmitted 8000 times per second, for a gross data rate of
51.84 Mbps. This is the basic SONET channel, called STS-1 (Synchronous
Transport Signal-1). All SONET trunks are a multiple of STS-1.
The first three columns of each frame
are reserved for system management information, as illustrated in Fig. 2-36. The first three rows contain the
section overhead; the next six contain the line overhead. The section overhead
is generated and checked at the start and end of each section, whereas the line
overhead is generated and checked at the start and end of each line.
A SONET transmitter sends
back-to-back 810-byte frames, without gaps between them, even when there are no
data (in which case it sends dummy data). From the receiver's point of view,
all it sees is a continuous bit stream, so how does it know where each frame
begins? The answer is that the first two bytes of each frame contain a fixed
pattern that the receiver searches for. If it finds this pattern in the same
place in a large number of consecutive frames, it assumes that it is in sync
with the sender. In theory, a user could insert this pattern into the payload
in a regular way, but in practice it cannot be done due to the multiplexing of
multiple users into the same frame and other reasons.
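A receiver's hunt for frame alignment can be sketched roughly as below. The two framing byte values and the acceptance threshold are assumptions made for illustration; only the idea of finding the pattern in the same place in many consecutive frames comes from the description above.

# Sketch: look for a fixed 2-byte pattern repeating every 810 bytes in a byte stream.
FRAMING = bytes([0xF6, 0x28])   # assumed example values for the fixed framing pattern
FRAME_LEN = 810
NEEDED = 8                      # declare "in sync" after this many consecutive matches (arbitrary)

def find_frame_start(stream):
    for offset in range(FRAME_LEN):
        hits, pos = 0, offset
        while pos + 2 <= len(stream) and stream[pos:pos + 2] == FRAMING:
            hits += 1
            pos += FRAME_LEN
        if hits >= NEEDED:
            return offset        # pattern seen at the same place in many frames
    return None                  # not yet synchronized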
The remaining 87 columns hold 87 x 9
x 8 x 8000 = 50.112 Mbps of user data. However, the user data, called the SPE (Synchronous
Payload Envelope), do not always begin in row 1, column 4. The SPE can begin
anywhere within the frame. A pointer to the first byte is contained in the
first row of the line overhead. The first column of the SPE is the path
overhead (i.e., header for the end-to-end path sublayer protocol).
The ability to allow the SPE to
begin anywhere within the SONET frame and even to span two frames, as shown in Fig. 2-36, gives added flexibility to the system.
For example, if a payload arrives at the source while a dummy SONET frame is
being constructed, it can be inserted into the current frame instead of being
held until the start of the next one.
The SONET multiplexing hierarchy is
shown in Fig. 2-37. Rates from STS-1 to STS-192 have been
defined. The optical carrier corresponding to STS-n is called OC-n but is bit
for bit the same except for a certain bit reordering needed for
synchronization. The SDH names are different, and they start at OC-3 because
CCITT-based systems do not have a rate near 51.84 Mbps. The OC-9 carrier is
present because it closely matches the speed of a major high-speed trunk used
in Japan. OC-18 and OC-36 are used in Japan. The gross data rate includes all
the overhead. The SPE data rate excludes the line and section overhead. The
user data rate excludes all overhead and counts only the 86 payload columns.
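All three rates for each level follow from the frame geometry: an STS-n frame is 9 rows by 90n columns, of which 3n columns are section and line overhead and n more are path overhead. A quick sketch:

# Gross, SPE, and user data rates (in Mbps) for STS-n, from the frame geometry.
def sts_rates(n):
    bits_per_sec = 9 * 8 * 8000                  # 9 rows of bytes, 8000 frames/sec
    gross = 90 * n * bits_per_sec / 1e6          # all columns
    spe   = 87 * n * bits_per_sec / 1e6          # minus 3n columns of section/line overhead
    user  = 86 * n * bits_per_sec / 1e6          # minus n path overhead columns as well
    return gross, spe, user

for n in (1, 3, 12, 48, 192):
    print("STS-%d:" % n, sts_rates(n))           # STS-1 gives 51.84, 50.112, 49.536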
As an aside, when a carrier, such as
OC-3, is not multiplexed, but carries the data from only a single source, the
letter c (for concatenated) is appended to the designation, so OC-3 indicates a
155.52-Mbps carrier consisting of three separate OC-1 carriers, but OC-3c
indicates a data stream from a single source at 155.52 Mbps. The three OC-1
streams within an OC-3c stream are interleaved by column, first column 1 from
stream 1, then column 1 from stream 2, then column 1 from stream 3, followed by
column 2 from stream 1, and so on, leading to a frame 270 columns wide and 9
rows deep.
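The column interleaving can be pictured with a toy sketch in which each byte is labeled by its source stream; the labels and function name are purely illustrative.

# Sketch: interleave one row of three 90-column STS-1 streams into a 270-column row.
def interleave_row(row1, row2, row3):            # each argument: a list of 90 bytes
    out = []
    for col in range(90):
        out.extend([row1[col], row2[col], row3[col]])
    return out                                    # s1c1, s2c1, s3c1, s1c2, s2c2, s3c2, ...

row = interleave_row(['a'] * 90, ['b'] * 90, ['c'] * 90)
print(row[:6])                                    # ['a', 'b', 'c', 'a', 'b', 'c']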