SDH-Based Physical Layer
Alternatively,
ATM cells can be carried over a line using SDH (synchronous digital
hierarchy)
or SONET. For the SDH-based physical layer, framing is imposed
using
the STM-1 (STS-3) frame (Figure 7.12b). Figure 11.11 shows the payload portion
of
an STM-1 frame. This payload may be offset from the beginning of the
frame,
as indicated by the pointer in the section overhead of the frame. As can be
seen,
the payload consists of a 9-octet path overhead portion and the remainder,
which
contains ATM cells. Because the payload capacity (2,340 octets) is not an
integer
multiple of the cell length (53 octets), a cell may cross a payload boundary.
The
H4 octet in the path overhead is set at the sending side to indicate the
next
occurrence of a cell boundary. That is, the value in the H4 field indicates the
number
of octets to the first cell boundary following the H4 octet. The permissible
range
of values is 0 to 52.
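Because 2,340 is not a multiple of 53, the cell boundary drifts from frame to frame, and the sender must recompute H4 for each frame. The sketch below shows one way this bookkeeping could work, under the simplifying assumption that the cell-bearing octets form one continuous stream and that each frame carries 2,331 of them (the 2,340-octet payload minus the 9-octet path overhead); the model and names are illustrative, not taken from the standard.

```python
CELL_LEN = 53

def h4_value(octets_sent_so_far: int) -> int:
    """Octets remaining until the next cell boundary (0..52)."""
    phase = octets_sent_so_far % CELL_LEN     # position inside the current cell
    return (CELL_LEN - phase) % CELL_LEN      # 0 means a cell begins immediately

# With a hypothetical 2,331 cell-bearing octets per frame, the boundary
# drifts from frame to frame because 2,331 is not a multiple of 53.
PAYLOAD_OCTETS_PER_FRAME = 2331   # assumption for illustration only
sent = 0
for frame in range(4):
    print(f"frame {frame}: H4 = {h4_value(sent)}")
    sent += PAYLOAD_OCTETS_PER_FRAME
```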
The
advantages of the SDH-based approach include the following:
• It can be used to carry either ATM-based or STM-based (synchronous transfer mode) payloads, making it possible to initially deploy a high-capacity, fiber-based transmission infrastructure for a variety of circuit-switched and dedicated applications, and then readily migrate to the support of ATM.
• Some specific connections can be circuit-switched using an SDH channel. For example, a connection carrying constant-bit-rate video traffic can be mapped into its own exclusive payload envelope of the STM-1 signal, which can be circuit switched. This procedure may be more efficient than ATM switching.
• Using SDH synchronous multiplexing techniques, several ATM streams can be combined to build interfaces with higher bit rates than those supported by the ATM layer at a particular site. For example, four separate ATM streams, each with a bit rate of 155 Mbps (STM-1), can be combined to build a 622-Mbps (STM-4) interface. This arrangement may be more cost-effective than one using a single 622-Mbps ATM stream.
ATM Adaptation Layer
The
use of ATM creates the need for an adaptation layer to support information
transfer
protocols not based on ATM. Two examples are PCM (pulse code modulation)
voice
and LAPF. PCM voice is an application that produces a stream of bits
from
a voice signal. To employ this application over ATM, it is necessary to
assemble
PCM
bits into cells for transmission and to read them out on reception in such
a
way as to produce a smooth, constant flow of bits to the receiver. LAPF is the
standard
data link control protocol for frame relay. In a mixed environment, in
which
frame relay networks interconnect with ATM networks, a convenient way of
integrating
the two is to map LAPF frames into ATM cells; this will usually mean
segmenting
one LAPF frame into a number of cells on transmission, and then
reassembling
the frame from cells on reception. By allowing the use of LAPF over
ATM,
all of the existing frame relay applications and control signaling protocols
can
be
used on an ATM network.
AAL Services
ITU-T I.362 lists the following general examples of services provided by AAL:
• Handling of transmission errors
• Segmentation and reassembly, to enable larger blocks of data to be carried in the information field of ATM cells
• Handling of lost and misinserted cell conditions
• Flow control and timing control
In
order to minimize the number of different AAL protocols that must be
specified
to meet a variety of needs, ITU-T has defined four classes of service that
cover
a broad range of requirements (Figure 11.12). The classification is based on
whether
a timing relationship must be maintained between source and destination,
whether
the application requires a constant bit rate, and whether the transfer is
connection-oriented
or connectionless. An example of a class A service is circuit
emulation.
In this case, a constant bit rate, which requires the maintenance of a timing
relation,
is used, and the transfer is connection-oriented. An example of a class
B service is variable-bit-rate video, such as might be
used in a videoconference.
Here,
the application is connection-oriented and timing is important, but the bit
rate
varies
depending on the amount of activity in the scene. Classes C and D correspond
to
data-transfer applications. In both cases, the bit rate may vary and no
particular
timing
relationship is required; differences in data rate are handled, using
buffers,
by the end systems. The data transfer may be either connection-oriented
(class
C) or connectionless (class D).
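The classification just described (cf. Figure 11.12) fits in a small lookup table; the following fragment simply restates it in Python for reference.

```python
# The four AAL service classes, as characterized in the text.
AAL_SERVICE_CLASSES = {
    "A": dict(timing=True,  bit_rate="constant", mode="connection-oriented",
              example="circuit emulation"),
    "B": dict(timing=True,  bit_rate="variable", mode="connection-oriented",
              example="variable-bit-rate video"),
    "C": dict(timing=False, bit_rate="variable", mode="connection-oriented",
              example="connection-oriented data transfer"),
    "D": dict(timing=False, bit_rate="variable", mode="connectionless",
              example="connectionless data transfer"),
}
```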
AAL Protocols
To
support these various classes of service, a set of protocols at the AAL level
has
been
defined. The AAL layer is organized into two logical sublayers: the Convergence
Sublayer
(CS) and the Segmentation and Reassembly Sublayer (SAR). The
convergence
sublayer provides the functions needed to support specific applications
using
AAL. Each AAL user attaches to AAL at a service access point (SAP), which
is
simply the address of the application. This sublayer is, then, service
dependent.
The
segmentation and reassembly sublayer is responsible for packaging information
received
from CS into cells for transmission and unpacking the information
at
the other end. As we have seen, at the ATM layer, each cell consists of a
5-octet
header
and a 48-octet information field. Thus, SAR must pack any SAR headers
and
trailers, plus CS information, into 48-octet blocks.
Initially,
ITU-T defined one protocol type for each class of service, named
Type
1
through
Type 4. Actually, each protocol type consists of two protocols, one
at
the CS sublayer and one at the SAR sublayer. More recently, types 3 and 4 were
merged
into a Type 3/4, and a new type, Type 5, was defined. Figure 11.12 shows
which
services are supported by which types. In all of these cases, a block of data
from
a higher layer is encapsulated into a protocol data unit (PDU) at the CS
sublayer.
In fact, this sublayer is referred to as the common-part convergence sublayer
(CPCS),
leaving open the possibility that additional, specialized functions may
be
performed at the CS level. The CPCS PDU is then passed to the SAR sublayer,
where
it is broken up into payload blocks. Each payload block can fit into an SAR-PDU,
which
has a total length of 48 octets. Each 48-octet SAR-PDU fits into a single
ATM
cell.
Figure
11.13 shows the formats of the protocol data units (PDUs) at the SAR
level
except for Type 2,
which
has not yet been defined.
In
the remainder of this section, we look at AAL Type 5, which is becoming
increasingly
popular, especially in ATM LAN applications. This protocol was introduced
to
provide a streamlined transport facility for higher-layer protocols that are
connection-oriented.
If it is assumed that the higher layer takes care of connection
management,
and that the ATM layer produces minimal errors, then most of the
fields
in the SAR and CPCS PDUs are not necessary. For example, with
connection-oriented service, the MID field is not necessary. This field is used in AAL 3/4 to multiplex different streams of data using the same virtual ATM connection (VCI/VPI).
In AAL 5, it is assumed that higher-layer software takes care of such
multiplexing.
Type 5 was introduced to
• Reduce protocol-processing overhead
• Reduce transmission overhead
• Ensure adaptability to existing transport protocols
To understand the operation of Type 5, let us begin with the CPCS level. The CPCS-PDU (Figure 11.14) includes a trailer with the following fields:
• CPCS User-to-User Indication (1 octet). Used to transparently transfer user-to-user information.
• Cyclic Redundancy Check (4 octets). Used to detect bit errors in the CPCS-PDU.
• Common Part Indicator (1 octet). Indicates the interpretation of the remaining fields in the CPCS-PDU trailer. Currently, only one interpretation is defined.
• Length (2 octets). Length of the CPCS-PDU payload field.
The payload from the next higher layer is padded out so that the entire CPCS-PDU is a multiple of 48 octets.
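A short sketch may make the padding rule concrete. The fragment below builds a CPCS-PDU with the four trailer fields in the conventional AAL5 byte order (CPCS-UU, CPI, Length, CRC); the use of zlib's CRC-32 is an illustrative stand-in for the CRC procedure the standard actually specifies, and all names here are ours.

```python
import struct
import zlib

def build_cpcs_pdu(payload: bytes, uu: int = 0, cpi: int = 0) -> bytes:
    # Pad so that payload + pad + 8-octet trailer is a multiple of 48 octets.
    pad_len = (-(len(payload) + 8)) % 48
    body = payload + bytes(pad_len)
    # Trailer head: CPCS-UU (1), CPI (1), Length (2); Length covers the payload only.
    head = struct.pack("!BBH", uu, cpi, len(payload))
    # Illustrative CRC-32 over everything that precedes the CRC field.
    crc = zlib.crc32(body + head) & 0xFFFFFFFF
    pdu = body + head + struct.pack("!I", crc)
    assert len(pdu) % 48 == 0
    return pdu
```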
The
SAR-PDU consists simply of 48 octets of payload, carrying a portion of
the
CPCS-PDU. The lack of protocol overhead has several implications:
1. Because there is no sequence number, the receiver must assume that all SAR-PDUs arrive in the proper order for reassembly. The CRC field in the CPCS-PDU is intended to verify this.
2.
The
lack of MID field means that it is not possible to interleave cells from
different
CPCS-PDUs.
Therefore, each successive SAR-PDU carries a portion
of
the current CPCS-PDU, or the first block of the next CPCS-PDU. To distinguish
between
these two cases, the ATM user-to-user indication (AAU) bit
in
the payload-type field of the ATM cell header is used (Figure 11.4). A
CPCS-PDU
consists of zero or more consecutive SAR-PDUs with AAU set
to
0, followed immediately by an SAR-PDU with AAU set to 1.
3.
The
lack of an LI field means that there is no way for the SAR entity to
distinguish
between
CPCS-PDU octets and filler in the last SAR-PDU. Therefore,
there
is no way for the SAR entity to find the CPCS-PDU trailer in the
last
SAR-PDU. To avoid this situation, it is required that the CPCS-PDU payload
be
padded out so that the last bit of the CPCS-trailer occurs as the last bit
of
the final SAR-PDU.
Figure
11.15 shows an example of AAL 5 transmission. The CPCS-PDU,
including
padding and trailer, is divided into 48-octet blocks. Each block is transmitted
in
a single ATM cell.
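The per-cell mechanics can be sketched as follows: the function cuts a padded CPCS-PDU into 48-octet SAR-PDUs and pairs each with the AAU value that would go into the cell header's payload-type field. The interface is hypothetical; an actual implementation would hand each pair to the ATM layer for cell construction.

```python
from typing import Iterator, Tuple

def segment_cpcs_pdu(cpcs_pdu: bytes) -> Iterator[Tuple[int, bytes]]:
    """Yield (aau, sar_pdu) pairs for a CPCS-PDU already padded to 48n octets."""
    assert cpcs_pdu and len(cpcs_pdu) % 48 == 0
    n = len(cpcs_pdu) // 48
    for i in range(n):
        aau = 1 if i == n - 1 else 0   # AAU = 1 marks the last cell of the PDU
        yield aau, cpcs_pdu[48 * i : 48 * (i + 1)]

# A 120-octet payload pads to 3 cells, giving the AAU sequence 0, 0, 1
# (using build_cpcs_pdu from the previous sketch):
# print([aau for aau, _ in segment_cpcs_pdu(build_cpcs_pdu(bytes(120)))])
```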
Traffic and Congestion Control
As
is the case with frame relay networks, traffic and congestion control
techniques
are
vital to the successful operation of ATM-based networks. Without such
techniques,
traffic
from user nodes can exceed the capacity of the network, causing
memory
buffers of ATM switches to overflow, leading to data losses.
ATM networks present difficulties in effectively
controlling congestion not
found
in other types of networks, including frame relay networks. The complexity
of
the problem is compounded by the limited number of overhead bits available for
exerting
control over the flow of user cells. This area is currently the subject of
intense
research, and no consensus has emerged for a full-blown traffic- and
congestion-control
strategy. Accordingly, ITU-T has defined a restricted initial set
of
traffic- and congestion-control capabilities aiming at simple mechanisms and
realistic
network efficiency; these are specified in I.371.
We
begin with an overview of the congestion problem and the framework
adopted
by ITU-T. We see that the focus of the mechanisms so far adopted is on
control
schemes for delay-sensitive traffic, such as voice and video. These schemes
are
not suited for handling bursty traffic, which is the subject of ongoing
research
and
standardization efforts. The discussion then turns to traffic control, which
refers
to
the set of actions taken by the network to avoid congestion. Finally, we
examine
congestion
control, which refers to the set of actions taken by the network to minimize
the
intensity, spread, and duration of congestion once congestion has already
occurred.
Requirements for ATM Traffic and Congestion Control
Both the types of traffic patterns imposed on ATM networks and the transmission characteristics of those networks differ markedly from those of other switching networks.
Most
packet-switched and frame relay networks carry non-real-time data
traffic.
Typically, the traffic on individual virtual circuits or frame relay
connections
is
bursty in nature, and the receiving system expects to receive incoming traffic
on
each
connection in such a fashion. As a result,
1. The network does not need to replicate
the exact timing pattern of incoming
traffic
at the exit node.
2.
Therefore, simple statistical multiplexing can be used to accommodate multiple
logical
connections over the physical interface between user and network.
The
average data rate required by each connection is less than the burst rate
for
that connection, and the user-network interface (UNI) need only be
designed
for a capacity somewhat greater than the sum of the average data
rates
for all connections.
A
number of tools are available for control of congestion in packet-switched
and
frame relay networks, as we have seen in the preceding two lessons. These
types
of congestion-control schemes are inadequate for ATM networks. [GERS91]
cites
the following reasons:
1. The majority of traffic is not amenable to flow control. For example, voice and
to flow control. For example, voice and
video
traffic sources cannot stop generating cells even when the network is
congested.
2.
Feedback is slow due to the drastically reduced cell transmission time compared
to
propagation delays across the network.
3.
ATM
networks typically support a wide range of applications requiring capacity
ranging
from a few kbps to several hundred Mbps. Relatively simpleminded
congestion
control schemes generally end up penalizing one end or
the
other of that spectrum.
4.
Applications on ATM networks may generate very different traffic patterns
(e.g.,
constant bit-rate versus variable bit-rate sources). Again, it is difficult for
conventional
congestion control techniques to handle fairly such variety.
5.
Different applications on ATM networks require different network services
(e.g.,
delay-sensitive service for voice and video, and loss-sensitive service for
data).
6.
The
very high speeds in switching and transmission make ATM networks
more
volatile in terms of congestion and traffic control. A scheme that relies
heavily
on reacting to changing conditions will produce extreme and wasteful
fluctuations
in routing policy and flow control.
A
key issue that relates to the above points is cell delay variation, a topic to
which
we
now turn.
Cell-Delay Variation
For
an ATM network, voice and video signals can be digitized and transmitted as a
stream
of cells. A key requirement, especially for voice, is that the delay across the
network
be short; generally, this will be the case for ATM networks. As we have
discussed,
ATM is designed to minimize the processing and transmission overhead
internal
to the network so that very fast cell switching and routing are possible.
There
is another important requirement that, to some extent, conflicts with
the
preceding requirement, namely that the rate of delivery of cells to the
destination
user
must be constant. Now, it is inevitable that there will be some variability
in
the rate of delivery of cells, due both to effects within the network and at
the
source
UNI; we summarize these effects presently. First, let us consider how the
destination
user might cope with variations in the delay of cells as they transit from
source
user to destination user.
A
general procedure for achieving a constant bit rate (CBR) is illustrated in
Figure
11.16. Let D(i) represent the end-to-end delay experienced by the ith
cell.
The
destination system does not know the exact amount of this delay; there is no
timestamp
information associated with each cell, and, even if there were, it is
impossible
to keep source and destination clocks perfectly synchronized. When the
first
cell on a connection arrives at time t(0), the target user delays the cell an additional amount V(0) prior to delivery to the application. V(0) is an estimate of the anticipated cell-delay variation. Each subsequent cell i is then held for a variable amount V(i), computed so that cells are handed to the application at a constant rate. If the computed value of V(i) is negative, then that cell is discarded. The result
is
that
data is delivered to the higher layer at a constant bit rate, with occasional
gaps
due
to dropped cells.
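A minimal sketch of this playout rule follows, under the assumption (consistent with the text, though not spelled out in it) that cell i is scheduled for delivery at t(0) + V(0) + i·delta, where delta is the constant inter-cell delivery interval.

```python
def playout(arrivals, v0: float, delta: float):
    """arrivals: arrival times t(i), nondecreasing; delta: inter-cell period."""
    t0 = arrivals[0]
    deliveries, discards = [], []
    for i, ti in enumerate(arrivals):
        vi = t0 + v0 + i * delta - ti        # residual holding time for cell i
        if vi < 0:
            discards.append(i)               # missed its slot: gap in the stream
        else:
            deliveries.append((i, ti + vi))  # delivered exactly on schedule
    return deliveries, discards

# delta = 6, V(0) = 4: cell 3 arrives 5 units after its slot and is dropped,
# leaving a gap; the rest emerge at the constant times 4, 10, 16, 28.
print(playout([0, 6, 13, 27, 28], v0=4, delta=6))
```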
The
amount of the initial delay V(0), which is also the average delay applied
to
all incoming cells, is a function of the anticipated cell-delay variation. To
minimize
this
delay, a subscriber will therefore request a minimal cell-delay variation
from
the network provider. This request leads to a trade-off; cell-delay variation
can
be
reduced by increasing the data rate at the UNI, relative to the load, and by
increasing
resources within the network.
Network Contribution to Cell-Delay Variation
One
component of cell-delay variation is due to events within the network. For
packet-switching
networks, packet delay variation can be considerable, due to
queuing
effects at each of the intermediate switching nodes; to a lesser extent, this
is
also true of frame delay variation in frame relay networks. However, in the
case
of
ATM networks, cell-delay variations due to network effects are likely to be
minimal;
the
principal reasons for this are the following:
1. The ATM protocol is designed to minimize
processing overhead at intermediate
switching
nodes. The cells are fixed-size with fixed-header formats, and
there
is no flow control or error control processing required.
2.
To accommodate the high speeds of ATM networks, ATM switches have had
to
be designed to provide extremely high throughput. Thus. the processing
time
for an individual cell at a node is negligible.
The
only factor that could lead to noticeable cell-delay variation within the
network
is congestion. If the network begins to become congested, either cells must
be
discarded or there will be a buildup of queuing delays at affected switches.
Thus,
it
is important that the total load accepted by the network at any time not be
such
as
to cause congestion.
Cell-Delay Variation at the UNI
Even
if an application generates data for transmission at a constant bit rate,
cell-delay variation
can occur at the source due to the processing that takes place at the
three
layers of the ATM model.
Figure
11.17 illustrates the potential causes of cell-delay variation. In this
example,
ATM connections A and B support user data rates of X and Y Mbps,
respectively.
At the AAL level, data is segmented into 48-octet blocks. Note that
on
a time diagram, the blocks appear to be of different sizes for the two
connections;
specifically, the time required to generate a 48-octet block of data, in microseconds, is 384/X for connection A and 384/Y for connection B, since a 48-octet block contains 384 bits. At X = 10 Mbps, for example, connection A produces a block every 38.4 µs.
The
ATM layer encapsulates each segment into a 53-octet cell. These cells
must
be interleaved and delivered to the physical layer to be transmitted at the
data
rate
of the physical link. Delay is introduced into this interleaving process: If two
cells
from different connections arrive at the ATM layer at overlapping times, one of
the
cells must be delayed by the amount of the overlap. In addition, the ATM layer
is
generating OAM (operation and maintenance) cells that must also be interleaved
with
user cells.
At
the physical layer, there is additional opportunity for the introduction of
further
cell delays. For example, if cells are transmitted in SDH frames, overhead
bits
for those frames will be inserted into the physical link, thereby delaying bits
from
the ATM layer.
None
of the delays just listed can be predicted in any detail, and none follow
any
repetitive pattern. Accordingly, there is a random element to the time interval
between
reception of data at the ATM layer from the AAL and the transmission of
that
data in a cell across the UNI.
Traffic and Congestion Control Framework
I.371 lists the following objectives of ATM layer traffic and congestion control:
• ATM layer traffic and congestion control should support a set of ATM layer Quality of Service (QOS) classes sufficient for all foreseeable network services; the specification of these QOS classes should be consistent with network performance parameters currently under study.
• ATM layer traffic and congestion control should not rely on AAL protocols that are network-service specific, nor on higher-layer protocols that are application specific. Protocol layers above the ATM layer may make use of information provided by the ATM layer to improve the utility those protocols can derive from the network.
• The design of an optimum set of ATM layer traffic controls and congestion controls should minimize network and end-system complexity while maximizing network utilization.
In
order to meet these objectives, ITU-T has defined a collection of traffic and
congestion
control functions that operate across a spectrum of timing intervals.
Table
11.3 lists these functions with respect to the response times within which they
operate.
Four levels of timing are considered:
• Cell insertion time. Functions at this level react immediately to cells as they are transmitted.
• Round-trip propagation time. At this level, the network responds within the lifetime of a cell in the network, and may provide feedback indications to the source.
• Connection duration. At this level, the network determines whether a new connection at a given QOS can be accommodated and what performance levels will be agreed to.
• Long term. These are controls that affect more than one ATM connection and that are established for long-term use.
The
essence of the traffic-control strategy is based on (1) determining whether
a
given new ATM connection can be accommodated and (2) agreeing with the
subscriber
on
the performance parameters that will be supported. In effect, the subscriber
and
the network enter into a traffic contract: The network agrees to support
traffic
at a certain level on this connection, and the subscriber agrees not to exceed
performance
limits. Traffic control functions are concerned with establishing these
traffic
parameters and enforcing them. Thus, they are concerned with congestion
avoidance.
If traffic control fails in certain instances, then congestion may occur. At
this
point, congestion-control functions are invoked to respond to and recover from
the
congestion.
Traffic Control
A variety of traffic control functions have been defined to maintain the QOS of ATM connections. These include
• Network resource management
• Connection admission control
• Usage parameter control
• Priority control
• Fast resource management
We examine each of these in turn.
Network Resource Management
The
essential concept behind network resource management is to allocate network
resources
in such a way as to separate traffic flows according to service
characteristics.
So
far, the only specific traffic control function based on network resource
management
deals with the use of virtual paths.
As
discussed earlier, a virtual path connection (VPC) provides a convenient
means
of grouping similar virtual channel connections (VCCs). The network provides
aggregate
capacity and performance characteristics on the virtual path, and
these
are shared by the virtual connections. There are three cases to consider:
• User-to-user application. The VPC extends between a pair of UNIs. In this case, the network has no knowledge of the QOS of the individual VCCs within a VPC. It is the user's responsibility to assure that the aggregate demand from the VCCs can be accommodated by the VPC.
• User-to-network application. The VPC extends between a UNI and a network node. In this case, the network is aware of the QOS of the VCCs within the VPC and has to accommodate them.
• Network-to-network application. The VPC extends between two network nodes. Again, in this case, the network is aware of the QOS of the VCCs within the VPC and has to accommodate them.
The
QOS parameters that are of primary concern for network resource management
are
cell loss ratio, cell transfer delay, and cell delay variation, all of which
are
affected by the number of resources devoted to the VPC by the network. If a
VCC
extends through multiple VPCs, then the performance on that VCC depends
on
the performances of the consecutive VPCs, and on how the connection is handled
at
any node that performs VCC-related functions. Such a node may be a
switch,
concentrator, or other network equipment. The performance of each VPC
depends
on the capacity of that VPC and the traffic characteristics of the VCCs
contained
within
the VPC. The performance of each VCC-related function depends on
the
switching/processing speed at the node and on the relative priority with which
various
cells are handled.
Figure
11.18 gives an example. VCCs 1 and 2 experience a performance that
depends
on VPCs b and c
and
on how these VCCs are handled by the intermediate
nodes;
this may differ from the performance experienced by VCCs 3,4, and 5.
There
are a number of alternatives for the way in which VCCs are grouped
and
the type of performance they experience. If all of the VCCs within a VPC are
handled
similarly, then they should experience similar expected network performance,
in
terms of cell-loss ratio, cell-transfer delay, and cell-delay variation.
Alternatively,
when
different VCCs within the same VPC require different QOS, the
VPC
performance objective agreed upon by network and subscriber should be suitably
set
for the most demanding VCC requirement.
In
either case, with multiple VCCs within the same VPC, the network has two
general
options for allocating capacity to the VPC:
1. Aggregate peak demand. The network may
set the capacity (data rate) of the
VPC
equal to the total of the peak data rates of all of the VCCs within the
VPC.
The advantage of this approach is that each VCC can be given a QOS
that
accommodates its peak demand. The disadvantage is that most of the
time,
the VPC capacity will not be fully utilized, and, therefore, the network
will
have underutilized resources.
2.
Statistical
multiplexing. If
the network sets the capacity of the VPC to be
greater
than or equal to the sum of the average data rates of all the VCCs but less than
the
aggregate peak demand, then a statistical multiplexing service is supplied.
With
statistical multiplexing, VCCs experience greater cell-delay variation
and
greater cell-transfer delay. Depending on the size of buffers used to queue
cells
for transmission, VCCs may also experience greater cell-loss ratio. This
approach
has the advantage of more efficient utilization of capacity, and is
attractive
if the VCCs can tolerate the lower QOS.
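A toy computation makes the trade-off concrete. For a hypothetical set of VCCs, option 1 sizes the VPC at the sum of the peaks, while option 2 picks any capacity between the sum of the averages and that peak total; all figures below are invented for illustration.

```python
# Hypothetical VCCs described as (peak_rate, average_rate) in Mbps.
vccs = [(10.0, 2.0), (10.0, 3.0), (25.0, 5.0)]

aggregate_peak = sum(p for p, _ in vccs)   # option 1 capacity: 45 Mbps
sum_of_averages = sum(a for _, a in vccs)  # floor for option 2: 10 Mbps

# Option 2: any VPC capacity in [sum_of_averages, aggregate_peak) yields a
# statistical-multiplexing service; the closer to the floor, the greater the
# cell-delay variation, transfer delay, and (with finite buffers) cell loss.
statistical_capacity = 20.0
assert sum_of_averages <= statistical_capacity < aggregate_peak
```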
When
statistical multiplexing is used, it is preferable to group VCCs into
VPCs
on the basis of similar traffic characteristics and similar QOS requirements.
If
dissimilar VCCs share the same VPC and statistical multiplexing is used, it is
difficult
to
provide fair access to both high-demand and low-demand traffic streams.
Connection Admission Control
Connection
admission control is the first line of defense for the network in protecting
itself
from excessive loads. In essence, when a user requests a new VPC or VCC,
the
user must specify (implicitly or explicitly) the traffic characteristics in
both
directions
for that connection. The user selects traffic characteristics by selecting a
QOS
from among the QOS classes that the network provides. The network accepts
the
connection only if it can commit the resources necessary to support that
traffic
level
while at the same time maintaining the agreed-upon QOS of existing connections.
By
accepting the connection, the network forms a traffic contract with the
user.
Once the connection is accepted, the network continues to provide the agreedupon
QOS
as long as the user complies with the traffic contract.
For
the current specification, the traffic contract consists of the four parameters
defined
in Table 11.4: peak cell rate (PCR), cell-delay variation (CDV), sustainable
cell
rate (SCR), and burst tolerance. Only the first two parameters are relevant for
a
constant bit rate (CBR) source; all four parameters may be used for variable
bit
rate
(VBR) sources.
As
the name suggests, the peak cell rate is the maximum rate at which cells are
generated
by the source on this connection. However, we need to take into account
the
cell-delay variation. Although a source may be generating cells at a constant
peak
rate, cell-delay variations introduced by various factors (see Figure 11.17)
will
affect
the timing, causing cells to clump up and gaps to occur. Thus, a source may
temporarily
exceed the peak cell rate due to clumping. For the network to properly
allocate
resources to this connection, it must know not only the peak cell rate but
also
the CDV.
The
exact relationship between peak cell rate and CDV depends on the operational
definitions
of these two terms. The standards provide these definitions in
terms
of a cell rate algorithm. Because this algorithm can be used for usage
parameter
control,
we defer a discussion until the next subsection.
The
PCR and CDV must be specified for every connection. As an option for
variable-bit-rate sources, the user may also specify a sustainable cell rate and burst
tolerance.
These parameters are analogous to PCR and CDV, respectively, but
apply
to an average rate of cell generation rather than to a peak rate. The user can
describe
the future flow of cells in greater detail by using the SCR and burst tolerance
as
well as the PCR and CDV. With this additional information, the network
may
be able to more efficiently utilize the network resources. For example, if a
number
of VCCs are statistically multiplexed over a VPC, knowledge of both average
and
peak cell rates enables the network to allocate buffers of sufficient size to
handle
the traffic efficiently without cell loss.
For
a given connection (VPC or VCC), the four traffic parameters may be
specified
in several ways, as illustrated in Table 11.5. Parameter values may be
implicitly
defined by default rules set by the network operator. In this case, all
connections
are
assigned the same values or all connections of a given class are assigned
the
same values for that class. The network operator may also associate parameter
values
with a given subscriber and assign these at the time of subscription. Finally,
parameter
values tailored to a particular connection may be assigned at connection
time.
In the case of a permanent virtual connection, these values are assigned by the
network
when the connection is set up. For a switched virtual connection, the
parameters
are
negotiated between the user and the network via a signaling protocol.
Another
aspect of quality of service that may be requested or assigned for a
connection
is cell-loss priority. A user may request two levels of cell-loss priority for
an
ATM connection; the priority of an individual cell is indicated by the user
through
the CLP bit in the cell header (see Figure 11.4). When two priority levels
are
used, the traffic parameters for both cell flows must be specified; typically,
this
is
done by specifying a set of traffic parameters for high-priority traffic (CLP = 0)
and
a set of traffic parameters for all traffic (CLP = 0 or 1). Based on this breakdown,
the
network may be able to allocate resources more efficiently.
Usage Parameter Control
Once
a connection has been accepted by the Connection Admission Control function,
the
Usage Parameter Control (UPC) function of the network monitors the
connection
to determine whether the traffic conforms to the traffic contract. The
main
purpose of Usage Parameter Control is to protect network resources from an
overload
on one connection that would adversely affect the QOS on other connections
by
detecting violations of assigned parameters and taking appropriate actions.
Usage
parameter control can be done at both the virtual path and virtual
channel
levels. Of these, the more important is VPC-level control, as network
resources
are, in general, initially allocated on the basis of virtual paths, with the
virtual
path
capacity shared among the member virtual channels.
There are two separate functions encompassed by usage parameter control:
• Control of peak cell rate and the associated cell-delay variation (CDV)
• Control of sustainable cell rate and the associated burst tolerance
Let
us first consider the peak cell rate and the associated cell-delay variation.
In
simple terms, a traffic flow is compliant if the peak rate of cell transmission
does
not
exceed the agreed-upon peak cell rate, subject to the possibility of cell-delay
variation
within the agreed-upon bound. I.371 defines an algorithm, the peak cell-rate algorithm, that monitors compliance. The algorithm operates on the basis of two parameters: a peak cell rate R and a CDV tolerance limit τ. Then, T = 1/R is the interarrival time between cells if there were no CDV. With CDV, T is the average interarrival time at the peak rate. The algorithm uses a form of leaky-bucket mechanism to monitor the rate at which cells arrive, in order to assure that the interarrival time is not so short as to cause the flow to exceed the peak cell rate by an amount greater than the tolerance limit.
The same algorithm, with different parameters, can be used to monitor the sustainable cell rate and the associated burst tolerance. In this case, the parameters are the sustainable cell rate R_s and a burst tolerance τ_s.
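As a rough illustration, the leaky-bucket monitor can be expressed in its "virtual scheduling" form: keep a theoretical arrival time (TAT) for the next cell, reject cells that arrive more than τ early, and advance TAT by T on each conforming cell. This is a common formulation, not a transcription of I.371's normative text; the same class polices the sustainable cell rate and burst tolerance when instantiated with T = 1/R_s and τ = τ_s.

```python
class CellRateMonitor:
    """Leaky-bucket cell-rate monitor, virtual-scheduling form (a sketch)."""

    def __init__(self, T: float, tau: float):
        self.T = T          # ideal inter-cell spacing (1/R)
        self.tau = tau      # tolerance (CDV or burst tolerance)
        self.tat = 0.0      # theoretical arrival time of the next cell

    def conforms(self, ta: float) -> bool:
        if ta < self.tat - self.tau:
            return False    # cell arrived too early: noncompliant
        self.tat = max(ta, self.tat) + self.T
        return True

# Example: peak rate of 1 cell per 10 time units, tolerance 2. The fourth
# cell arrives 3 units too early and is flagged as noncompliant.
upc = CellRateMonitor(T=10.0, tau=2.0)
print([upc.conforms(t) for t in (0, 10, 18, 25, 40)])
# -> [True, True, True, False, True]
```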
The
cell-rate algorithm is rather complex; details can be found in [STAL95a].
The
cell-rate algorithm simply defines a way to monitor compliance with the traffic
contract.
To perform usage parameter control, the network must act on the results
of
the algorithm. The simplest strategy passes along compliant cells and discards
noncompliant
cells at the point of the UPC function.
At
the network's option, cell tagging may also be used for noncompliant cells.
In
this case, a noncompliant cell may be tagged with CLP = 1 (low priority)
and
passed.
Such cells are then subject to discard at a later point in the network.
If
the user has negotiated two levels of cell-loss priority for a network, then
the
situation is more complex. Recall that the user may negotiate a traffic
contract
for
high-priority traffic (CLP = 0)
and a separate contract for aggregate traffic
(CLP = 0 or 1). The following rules apply:
1. A cell with CLP = 0 that conforms to the traffic contract for CLP = 0 passes.
2. A cell with CLP = 0 that is noncompliant for (CLP = 0) traffic but compliant for (CLP = 0 or 1) traffic is tagged and passed.
3. A cell with CLP = 0 that is noncompliant for (CLP = 0) traffic and noncompliant for (CLP = 0 or 1) traffic is discarded.
4. A cell with CLP = 1 that is compliant for (CLP = 0 or 1) traffic is passed.
5. A cell with CLP = 1 that is noncompliant for (CLP = 0 or 1) traffic is discarded.
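The five rules reduce to a small decision function. The two compliance flags are assumed to come from separate cell-rate monitors, one policing the CLP = 0 contract and one policing the aggregate (CLP = 0 or 1) contract.

```python
def upc_action(clp: int, ok_clp0: bool, ok_aggregate: bool) -> str:
    """clp: the cell's CLP bit; ok_*: verdicts from the two monitors."""
    if clp == 0:
        if ok_clp0:
            return "pass"                                       # rule 1
        return "tag and pass" if ok_aggregate else "discard"    # rules 2, 3
    return "pass" if ok_aggregate else "discard"                # rules 4, 5
```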
Priority Control
Priority
control comes into play when the network, at some point beyond the UPC
function, discards (CLP = 1) cells. The objective is to discard lower-priority cells in
order
to protect the performance for higher-priority cells. Note that the network
has
no way to discriminate between cells that were labeled as lower-priority by the
source
and cells that were tagged by the UPC function.
Fast Resource Management
Fast
resource management functions operate on the time scale of the round-trip
propagation
delay of the ATM connection. The current version of 1.371 lists fastresource
management
as a potential tool for traffic control that is for further study.
One
example of such a function that is given in the Recommendation is the ability
of
the network to respond to a request by a user to send a burst. That is, the
user
would
like to temporarily exceed the current traffic contract to send a relatively
large
amount of data. If the network determines that the resources exist along the
route
for this VCC or VPC for such a burst, then the network reserves those
resources
and grants permission. Following the burst, the normal traffic control is
enforced.
Congestion Control
ATM
congestion control refers to the set of actions taken by the network to
minimize
the
intensity, spread, and duration of congestion. These actions are triggered
by
congestion in one or more network elements. The following two functions have
been
defined:
• Selective cell discarding
• Explicit forward congestion indication
Selective Cell Discarding
Selective
cell discarding is similar to priority control. In the priority control function, (CLP = 1) cells are discarded to avoid congestion. However, only "excess" cells are discarded; that is, cells are limited so that the performance objectives for the (CLP = 0) and (CLP = 1) flows are still met. Once congestion actually occurs, the network
is
no longer bound to meet all performance objectives. To recover from a congested
condition,
the network is free to discard any (CLP = 1) cell and may even
discard
(CLP = 0) cells on ATM
connections that are not complying with their traffic
contract.
Explicit Forward Congestion Indication
Explicit
forward congestion notification for ATM networks works in essentially the
same
manner as for frame relay networks. Any ATM network node that is experiencing
congestion
may set an explicit forward congestion indication in the payload
type
field of the cell header of cells on connections passing through the node
(Figure
11.4).
The indication notifies the user that congestion avoidance procedures
should
be initiated for traffic in the same direction as the received cell. It
indicates
that
this cell on this ATM connection has encountered congested resources. The
user
may then invoke actions in higher-layer protocols to adaptively lower the cell
rate
of the connection.
The
network issues the indication by setting the first two bits of the payload
type
field in the cell header to 01 (Table 11.2). Once this value is set by any
node, it
may
not be altered by other network nodes along the path to the destination user.
Note
that the generic flow control (GFC) field is not involved. The GFC field
has only local significance
and cannot be communicated across the network.
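As an illustration of how little room the header leaves for this signal, here is a hedged sketch of setting the EFCI indication in a raw 53-octet cell, assuming the UNI header layout in which octet 4 carries the low VCI nibble (bits 7..4), the payload-type field (bits 3..1), and the CLP bit (bit 0).

```python
def set_efci(cell: bytearray) -> None:
    """Set the payload type to 01x: user data cell, congestion experienced."""
    # PT MSB <- 0 (user cell), next PT bit <- 1 (EFCI); other bits untouched.
    cell[3] = (cell[3] & ~0x08) | 0x04
    # A real node would also have to recompute the HEC in octet 5.

cell = bytearray(53)
set_efci(cell)
assert (cell[3] >> 1) & 0b110 == 0b010   # first two PT bits are now 01
```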