CONGESTION CONTROL
As
with routing, the concept of traffic control in a packet-switching network is
complex,
and
a wide variety of approaches have been proposed. The objective here is to
maintain
the number of packets within the network below the level at which performance
falls
off dramatically.
To
understand the issues involved in congestion control, we need to look at
some
results from queuing theory. In essence, a packet-switching network is a
network
of
queues. At each node, there is a queue of packets for each outgoing channel.
If
the rate at which packets arrive and queue up exceeds the rate at which packets
can
be transmitted, the queue size grows without bound and the delay
experienced
by a packet goes to infinity. Even if the packet arrival rate is less than
the
packet transmission rate, queue length will grow dramatically as the arrival
rate
approaches
the transmission rate. We saw this kind of behavior in Figure 7.16. As a
rule
of thumb, when the line for which packets are queuing becomes more than
80%
utilized, the queue length grows at an alarming rate.
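To make this concrete, the sketch below assumes the simplest single-server (M/M/1) queuing model, for which the mean time in the system is 1/(mu - lambda). The specific model is an assumption made here for illustration; the text only appeals to queuing theory in general terms.

```python
# Minimal sketch: delay growth as a line's utilization approaches 1,
# using the classic M/M/1 result T = 1 / (mu - lambda). The M/M/1 model
# is an illustrative assumption, not something specified in the text.

def mm1_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean time a packet spends in the system (queueing plus transmission)."""
    if arrival_rate >= service_rate:
        return float("inf")            # queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

service_rate = 1.0                      # packets per unit time the line can send
for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    delay = mm1_delay(utilization * service_rate, service_rate)
    print(f"utilization {utilization:4.2f} -> mean delay {delay:6.1f}")

# Delay at 95% utilization is already four times the delay at 80%,
# matching the rule of thumb that trouble starts around 80%.
```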
Consider
the queuing situation at a single packet-switching node, such as is
illustrated
in Figure 9.13. Any given node has a number of transmission links
attached
to it: one or more to other packet-switching nodes, and zero or more to
host
systems. On each link, packets arrive and depart. We can consider that there
are
two buffers at each link, one to accept arriving packets, and one to hold packets
that
are waiting to depart. In practice, there might be two fixed-size buffers
associated
with
each link, or there might be a pool of memory available for all buffering
activities.
In the latter case, we can think of each link having two variable-size
buffers
associated with it, subject to the constraint that the sum of all buffer sizes
is
a
constant.
In
any case, as packets arrive, they are stored in the input buffer of the
corresponding
link.
The node examines each incoming packet to make a routing decision,
and
then moves the packet to the appropriate output buffer. Packets queued up for
output
are transmitted as rapidly as possible; this is, in effect, statistical
time-division multiplexing.
Now, if packets arrive too fast for the node to process them
(make
routing decisions), or faster than packets can be cleared from the outgoing
buffers,
then, eventually, packets will arrive for which no memory is available.
When
such a saturation point is reached, one of two general strategies can be
adopted.
The first such strategy is to simply discard any incoming packet for which
there
is no available buffer space. The alternative is for the node that is
experiencing
these
problems to exercise some sort of flow control over its neighbors so that
the
traffic flow remains manageable. But, as Figure 9.14 illustrates, each of a
node's
neighbors
is also managing a number of queues. If node 6 restrains the flow of packets
from
node 5, this causes the output buffer in node 5 for the link to node 6 to fill
up.
Thus, congestion at one point in the network can quickly propagate throughout
a
region or throughout all of the network. While flow control is indeed a
powerful
tool,
we need to use it in such a way as to manage the traffic on the entire network.
Figure
9.15 shows the effect of congestion in general terms. Figure 9.15a plots
the
throughput of a network (number of packets delivered to destination stations)
versus
the offered load (number of packets transmitted by source stations). Both
axes
are normalized to the maximum capacity of the network, which can be
expressed
as the rate at which the network is theoretically capable of handling packets.
In the ideal case, throughput and, hence, network utilization increase to
accommodate
an
offered load up to the maximum capacity of the network. Utilization
then
remains at 100%. The ideal case, of course, requires that all stations somehow
know
the timing and rate of packets that can be presented to the network, which is
impossible.
If no congestion control is exercised, we have the curve labeled
"uncontrolled."
As
the load increases, utilization increases for a while. Then as the queue
lengths
at the various nodes begin to grow, throughput actually drops because the
buffers
at each node are of finite size. When a node's buffers are full, it must
discard
packets.
Thus, the source stations must retransmit the discarded packets in
addition
to the new packets; this only exacerbates the situation: As more and more
packets
are retransmitted, the load on the system grows, and more buffers become
saturated.
While the system is trying desperately to clear the backlog, stations are
pumping
old and new packets into the system. Even successfully delivered packets
may
be retransmitted because it takes so long to acknowledge them: The sender
assumes
that the packet did not go through. Under these circumstances, the effective
capacity
of the system is virtually zero.
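The qualitative shapes in Figure 9.15a can be reproduced with a toy model such as the one below. The 80% knee and the retransmission penalty built into the uncontrolled curve are illustrative assumptions, not values taken from the figure.

```python
# Toy model of the ideal and uncontrolled curves of Figure 9.15a.
# The knee at 80% of capacity and the penalty factor are assumptions
# chosen only to reproduce the qualitative behavior described in the text:
# once buffers overflow, retransmissions crowd out new traffic and
# useful throughput collapses toward zero.

def ideal_throughput(offered_load: float, capacity: float = 1.0) -> float:
    return min(offered_load, capacity)

def uncontrolled_throughput(offered_load: float, capacity: float = 1.0) -> float:
    if offered_load <= 0.8 * capacity:
        return offered_load                     # buffers keep up with the load
    excess = offered_load - 0.8 * capacity
    return max(0.0, 0.8 * capacity - 2.0 * excess)

for load in (0.4, 0.8, 1.0, 1.2):
    print(load, ideal_throughput(load), round(uncontrolled_throughput(load), 2))
```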
It
is clear that these catastrophic events must be avoided; this is the task of
congestion
control. The object of all congestion-control techniques is to limit queue
lengths
at the nodes so as to avoid throughput collapse. This control involves some
unavoidable
overhead. Thus, a congestion-control technique cannot perform as
well
as the theoretical ideal. However, a good congestion-control strategy will
avoid
throughput
collapse and maintain a throughput that differs from the ideal by an
amount
roughly equal to the overhead of the control.
Figure
9.15b points out that no matter what technique is used, the average
delay
experienced by packets grows without bound as the load approaches the
capacity
of the system. Note that initially the uncontrolled policy results in less
delay
than a controlled policy, because of its lack of overhead. However, the
uncontrolled
policy
will saturate at a lower load.
A
number of control mechanisms for congestion control in packet-switching
networks
have been suggested and tried. The following are examples:
1. Send a control packet from a congested node to some or all source nodes. This choke packet will have the effect of stopping or slowing the rate of transmission from sources and, hence, limiting the total number of packets in the network. This approach requires additional traffic on the network during a period of congestion (a sketch of this mechanism appears after the list).
2. Rely on routing information. Routing algorithms, such as ARPANET's, provide link delay information to other nodes, which influences routing decisions. This information could also be used to influence the rate at which new packets are produced. Because these delays are being influenced by the routing decision, they may vary too rapidly to be used effectively for congestion control.
3. Make use of an end-to-end probe packet. Such a packet could be timestamped to measure the delay between two particular endpoints. This procedure has the disadvantage of adding overhead to the network.
4. Allow packet-switching nodes to add congestion information to packets as they go by. There are two possible approaches here. A node could add such information to packets going in the direction opposite of the congestion. This information quickly reaches the source node, which can reduce the flow of packets into the network. Alternatively, a node could add such information to packets going in the same direction as the congestion. The destination either asks the source to adjust the load or returns the signal back to the source in the packets (or acknowledgments) going in the reverse direction.
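A minimal sketch of the choke-packet mechanism of item 1 follows. The queue-length threshold, the rate-halving response, and the helper names (send_control_packet, send_rate, min_rate) are assumptions made for illustration, not part of any particular standard.

```python
# Sketch of mechanism 1, the choke packet. All field names, the threshold,
# and the rate-halving policy below are illustrative assumptions.

QUEUE_THRESHOLD = 48      # packets queued on an outgoing link before we complain

def maybe_send_choke(node, link, source_address) -> None:
    """Congested node: ask a traffic source to slow down."""
    if len(link.out_buf) > QUEUE_THRESHOLD:
        choke = {"type": "choke", "dst": source_address}
        node.send_control_packet(choke)   # note: extra traffic during congestion

def on_choke_received(source) -> None:
    """Source node: reduce its transmission rate on receipt of a choke packet."""
    source.send_rate = max(source.min_rate, source.send_rate / 2)
```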
X.25
Perhaps
the best-known and most widely used protocol standard is X.25, which was
originally
approved in 1976 and subsequently revised in 1980, 1984, 1988, 1992, and
1993.
The standard specifies an interface between a host system and a packet-switching
network.
This standard is almost universally used for interfacing to
packet-switching
networks and is employed for packet switching in ISDN. In this
section,
a brief overview of the standard is provided.
The
standard specifically calls for three layers of functionality (Figure 9.16):
* Physical layer
* Link layer
* Packet layer
These
three layers correspond to the lowest three layers of the OSI model (see
Figure
1.10). The physical layer deals with the physical interface between an
attached
station (computer, terminal) and the link that attaches that station to the
packet-switching
node. The standard refers to user machines as data terminal
equipment
(DTE) and to a packet-switching node to which a DTE is attached as
data
circuit-terminating equipment (DCE). X.25 makes use of the physical-layer
specification
in a standard known as X.21, but, in many cases, other standards, such
as
EIA-232, are substituted. The link layer provides for the reliable transfer of
data
across
the physical link by transmitting the data as a sequence of frames. The
link-layer standard
is referred to as LAPB (Link Access Protocol-Balanced). LAPB is
a
subset of HDLC, described in Lesson 6. The packet layer provides an external
virtual-circuit
service, and is described in this section.
Figure
9.17 illustrates the relationship between the levels of X.25. User data
are
passed down to X.25 level 3, which appends control information as a header,
creating
a packet. This control information is used in the operation of
the protocol,
as
we shall see. The entire X.25 packet is then passed down to the LAPB entity,
which
appends control information at the front and back of the packet, forming an
LAPB
frame. Again, the control information in the frame is needed for the
operation
of
the LAPB protocol.
Virtual
Circuit Service
With
the X.25 packet layer, data are transmitted in packets over external virtual
circuits.
The
virtual-circuit service of X.25 provides for two types of virtual circuit:
virtual
call
and permanent virtual circuit. A virtual call is a dynamically
established virtual
circuit
using a call setup and call clearing procedure, explained below. A
permanent
virtual circuit is
a fixed, network-assigned virtual circuit. Data transfer
occurs
as with virtual calls, but no call setup or clearing is required.
Figure
9.18 shows a typical sequence of events in a virtual call. The left-hand
part
of the figure shows the packets exchanged between user machine A and the
packet-switching
node to which it attaches; the right-hand part shows the packets
exchanged
between user machine B and its node. The routing of packets inside the
network
is not visible to the user.
The
sequence of events is as follows:
1. A requests a virtual circuit to B by sending a Call-Request packet to A's DCE. The packet includes the source and destination addresses, as well as the virtual-circuit number to be used for this new virtual circuit. Future incoming and outgoing transfers will be identified by this virtual-circuit number.
2. The network routes this call request to B's DCE.
3. B's DCE receives the Call Request and sends an Incoming-Call packet to B. This packet has the same format as the Call-Request packet but utilizes a different virtual-circuit number, selected by B's DCE from the set of locally unused numbers.
4. B indicates acceptance of the call by sending a Call-Accepted packet specifying the same virtual-circuit number as that of the Incoming-Call packet.
5. A's DCE receives the Call-Accepted packet and sends a Call-Connected packet to A. This packet has the same format as the Call-Accepted packet but uses the virtual-circuit number of the original Call-Request packet.
6. A and B send data and control packets to each other using their respective virtual-circuit numbers.
7. A (or B) sends a Clear-Request packet to terminate the virtual circuit and receives a Clear-Confirmation packet.
8. B (or A) receives a Clear-Indication packet and transmits a Clear-Confirmation packet.
We
now turn to some of the details of the standard.
Packet Format
Figure
9.19 shows the basic X.25 packet formats. For user data, the data are broken
up
into blocks of some maximum size, and a 24-bit or 32-bit header is appended to
each
block to form a data packet. The header includes a 12-bit
virtual-circuit number
(expressed
as a 4-bit group number and an 8-bit channel number). The P(S) and
P(R)
fields support the functions of flow control and error control on a virtual-circuit
basis,
as explained below. The M and D bits are described below. The Q
bit
is
not defined in the standard, but allows the user to distinguish two types of
data.
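The following sketch shows the arithmetic behind the 12-bit virtual-circuit number and the 3-bit sequence numbers. The bit positions used here are simplified assumptions rather than the exact layout of Figure 9.19.

```python
# Sketch of the fields named in the text: the 12-bit virtual-circuit number
# (4-bit group number plus 8-bit channel number) and the 3-bit P(S)/P(R)
# sequence numbers. Field positions are simplified assumptions; consult the
# standard (Figure 9.19) for the actual bit layout.

def make_vc_number(group: int, channel: int) -> int:
    """Combine a 4-bit group number and an 8-bit channel number."""
    assert 0 <= group < 16 and 0 <= channel < 256
    return (group << 8) | channel

def split_vc_number(vc: int) -> tuple[int, int]:
    return (vc >> 8) & 0xF, vc & 0xFF

vc = make_vc_number(group=3, channel=200)   # -> 968
print(split_vc_number(vc))                  # (3, 200)

# With 3-bit P(S)/P(R) fields, sequence numbers wrap modulo 8, so at most
# 7 packets may be outstanding on a virtual circuit before an acknowledgment.
next_ps = (6 + 1) % 8                       # after 6 comes 7, then 0
```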
In
addition to transmitting user data, X.25 must transmit control information
related
to the establishment, maintenance, and termination of virtual circuits. Control
information
is transmitted in a control packet. Each control packet includes the
virtual-circuit
number, the packet type, which identifies the particular control function,
as
well as additional control information related to that function. For example,
a
Call-Request packet includes the following additional fields:
* Calling DTE address length (4 bits): length of the corresponding address field in 4-bit units.
* Called DTE address length (4 bits): length of the corresponding address field in 4-bit units.
* DTE addresses (variable): the calling and called DTE addresses.
* Facilities: a sequence of facility specifications. Each specification consists of an 8-bit facility code and zero or more parameter codes. An example of a facility is reverse charging.
Table
9.3 lists all of the X.25 packets. Most of these have already been discussed.
A
brief description of the remainder follows.
A
DTE may send an Interrupt packet that bypasses the flow-control procedures
for
data packets. The interrupt packet is to be delivered to the destination
DTE
by the network at a higher priority than data packets in transit. An example
of
the use of this capability is the transmission of a terminal-break character.
The
Reset packets provide a facility for recovering from an error by reinitializing
a
virtual circuit, meaning that the sequence numbers on both ends are set to 0.
Any
data or interrupt packets in transit are lost. A reset can be triggered by a
number
of
error conditions, including loss of a packet, sequence number error,
congestion,
or
loss of the network's internal logical connection. In the latter case, the two
DCEs
must rebuild the internal logical connection to support the still-existing X.25
DTE-DTE
virtual circuit.
A
more serious error condition is dealt with by a Restart, which terminates all
active
virtual calls. An example of a condition warranting restart is temporary loss
of
access to the network.
The
Diagnostic packet provides a means to signal certain error conditions that
do
not warrant reinitialization. The Registration packets are used to invoke and
confirm
X.25 facilities.
Multiplexing
Perhaps
the most important service provided by X.25 is multiplexing. A DTE is
allowed
to establish up to 4095 simultaneous virtual circuits with other DTEs over
a
single physical DTE-DCE link. The DTE can internally assign these circuits in
any
way it pleases. Individual virtual circuits could correspond to applications,
processes,
or terminals, for example. The DTE-DCE link provides full-duplex
multiplexing;
that is, at any time, a packet associated with a given virtual circuit can
be
transmitted in either direction.
To
sort out which packets belong to which virtual circuits, each packet contains
a
12-bit virtual-circuit number (expressed as a 4-bit logical group number plus
an
8-bit logical channel number). The assignment of virtual-circuit numbers
follows
the
convention depicted in Figure 9.20. Number zero is always reserved for
diagnostic
packets
common to all virtual circuits. Then, contiguous ranges of numbers
are
allocated for four categories of virtual circuits. Permanent virtual circuits
are
assigned
numbers beginning with 1. The next category is one-way, incoming virtual
calls.
This means that only incoming calls from the network can be assigned these
numbers;
the virtual circuit, however, is two-way (full duplex). When a call request
comes
in, the DCE selects an unused number from this category.
One-way
outgoing calls are those initiated by the DTE. In this case, the DTE
selects
an unused number from among those allocated for these calls. This separation
of categories is intended to avoid the simultaneous selection of the same
number
for
two different virtual circuits by the DTE and DCE.
The
two-way virtual-call category provides an overflow for allocation shared
by DTE and DCE, allowing
for peak differences in traffic flow.
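The assignment convention of Figure 9.20 can be sketched as simple range bookkeeping. The range boundaries below are illustrative assumptions; the actual ranges are agreed for each DTE-DCE interface.

```python
# Sketch of the virtual-circuit number assignment convention of Figure 9.20.
# Number 0 is reserved for diagnostic packets, PVCs start at 1, and the
# remaining categories follow in contiguous ranges. The specific boundary
# values below are assumptions made for this example.

RANGES = {
    "permanent":        range(1, 17),       # assumed: 16 permanent virtual circuits
    "one_way_incoming": range(17, 129),     # DCE assigns from here for incoming calls
    "two_way":          range(129, 3969),   # shared overflow pool for either side
    "one_way_outgoing": range(3969, 4096),  # DTE assigns from here for outgoing calls
}

def pick_unused(category: str, in_use: set) -> int:
    """Select an unused virtual-circuit number from the given category."""
    for number in RANGES[category]:
        if number not in in_use:
            return number
    raise RuntimeError("no free virtual-circuit number in this category")

in_use = {1, 2, 17}
print(pick_unused("one_way_incoming", in_use))   # -> 18   (DCE side, incoming call)
print(pick_unused("one_way_outgoing", in_use))   # -> 3969 (DTE side, outgoing call)
```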