
Introduction

The evolution of data networks entails growth in the volume of transmitted traffic, which calls for a quality of service (QoS) policy. Implementing such a policy allows network traffic to be classified and network resources to be distributed between the different traffic classes.

Terminology

  • QoS (Quality of Service) - a technology that classifies data streams and prioritizes the transmission of each stream according to its class.
  • QoS policy - a document describing the principles of traffic stream classification and the resource requirements for each class.
  • Traffic stream - the data of a single service transmitted between two nodes.
  • Service - a process running on end nodes. The data of a service is distinguished by a unique set of service field values within the network packet's structure. IP telephony, web, and video surveillance are examples of services.
  • Responsibility area - a network segment whose effective operation is the responsibility of a certain subject. A subject can be either a specific person or an organization.
  • DS domain (Differentiated Services domain) - a logical area with uniform traffic classification rules, defined by a QoS policy. Usually the DS domain coincides with the responsibility area.
  • CIR - Committed Information Rate. The system must guarantee resource allocation in compliance with the CIR of the service.
  • MIR - Maximum Information Rate. Once the CIR is ensured, additional resources may be allocated to services. These additional resources cannot exceed the MIR threshold and their allocation is not guaranteed.

Packet distribution scheme

In packet networks, traffic is transmitted from the sending node to the receiving node through communication channels and intermediate devices. In general, a data packet is processed by each intermediate device independently. Let's look at an example of data packet processing by an intermediate network device (Figure 1):

  1. Node-2 forms a data packet and transmits it to Medium-2. The data packet is encapsulated in a frame of the L2 protocol used in Medium-2.
  2. The data frame propagates through Medium-2. The frame is converted into a modulated signal in accordance with the physical properties of the medium. The signals used in wired and wireless environments differ, which affects their propagation properties and usage scenarios.
  3. The signal arrives at the incoming device interface; after demodulation, the received data frame is checked for integrity: damaged frames are discarded.
  4. At the next stage, routing determines the frame's further path. If the frame is addressed to the network device itself, it is passed for processing to internal services. If the frame is addressed to another node, two scenarios are possible: the frame is either passed on through the outgoing interface or discarded. The latter happens when Medium-2 is a shared environment in which all connected devices receive every signal: in accordance with the operating principles of L2 protocols, if the receiver address in the frame header does not belong to the device, the device should discard the frame.
  5. If the frame should be processed and transferred to another node, it enters a packet queue. A packet queue is a set of buffers that hold data received by incoming interfaces. The number and size of the memory buffers used to store packet queues are not standardized and depend on the equipment manufacturer. For example, devices of the InfiLINK 2x2 family have 32 queues, 17 of which are available for configuration by the user.
  6. The data frame passes through the packet queue into which it was placed and reaches the outgoing interface.
  7. Since packet queues link incoming and outgoing interfaces, a device needs a controller that fills the queues with incoming data and picks data from the queues for transmission to outgoing interfaces. Usually these functions are performed by the central processing unit (CPU). As will be shown below, filling and picking data from queues can be performed unevenly, depending on the classification of the data streams.
  8. The outgoing interface generates a modulated signal and transmits it to Medium-5, to which Node-5, the receiver of the original data frame, is connected.
  9. Node-5 receives the signal, demodulates it, and processes the received data frame.

Note that in modern network devices, network interfaces are usually combined and can operate both as incoming and outgoing.


Figure 1 - A network device traffic passing scheme

A network device can be intermediate for several pairs of nodes, and each pair can transmit the data of several services (Figure 2a). Let's look at a scheme where the network device is intermediate for the traffic of the node pairs Node-1 - Node-4, Node-2 - Node-5 and Node-3 - Node-6. The first pair transmits the data of three services, the second of two, and the third of one. Without QoS settings, the data of all services pass through a common queue in the order they are received by the network device, and in the same order they are transferred from the queue to the outgoing interfaces.

With QoS configured, each incoming traffic flow can be classified, for example by service type, and a separate queue can be mapped to each class (Figure 2b). Each packet queue can be assigned a priority, which is taken into account when packets are extracted from the queues and guarantees the quality indicators. Traffic flows can be classified not only by the services used, but also by other criteria. For example, each pair of nodes can be assigned a separate queue (Figure 2c).

Figure 2a - Queuing for various services without QoS

Figure 2b - Queuing for various services with QoS

Figure 2c - Queuing for various users with QoS

Keep in mind that several intermediate network devices can be located on the data path between the source and the receiver, each with independent packet queues, i.e. effective QoS policy implementation requires configuring all network nodes.

Quality indicators

The main conclusions from the previous section, which will be used to define the quality metrics:

  • The throughput of communication channels and network devices is not infinite.
  • The data delivery time from source to destination is non-zero.
  • A communication channel is a medium with a set of physical parameters that define signal propagation effects.
  • The software and hardware architecture of a network device can affect data distribution.

There are three main quality metrics:

  • Losses.
  • Delay.
  • Jitter.

Let's look at the metrics using an example: Node-2 transmits three data packets to Node-5, the data source and recipient are connected to an intermediate network device, and the packets are transmitted within the same service, i.e. their key service fields are identical.

Losses

During the transmission of a data stream, some packets may not be received, or may be received with errors. This is called data loss, and it is measured as the ratio of the number of lost packets to the number of transmitted packets. In the example (Figure 3), Node-2 transmits packets with identifiers 1, 2 and 3, but Node-5 receives only packets 1 and 3, i.e. the packet with identifier 2 was lost. There are network mechanisms that allow retransmitting lost data, for example, the TCP and ARQ protocols.
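As a sketch, the loss ratio from the Figure 3 example can be computed from the sent and received packet identifiers (the lists below contain just the identifiers named in the example):

```python
# Sketch: computing the packet loss ratio from sent and received
# packet identifiers, using the values from the Figure 3 example.
sent_ids = [1, 2, 3]       # packets transmitted by Node-2
received_ids = [1, 3]      # packets received by Node-5

lost = set(sent_ids) - set(received_ids)
loss_ratio = len(lost) / len(sent_ids)

print(f"lost packets: {sorted(lost)}")   # [2]
print(f"loss ratio: {loss_ratio:.0%}")   # 33%
```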

The causes of data loss can be divided into the following groups:

  • Losses in the medium: losses related to signal propagation in the physical environment. For example, a frame will be lost if the useful signal level is below the receiver sensitivity. Losses can also be caused by physical damage to the interfaces connected to the medium or by impulse noise resulting from poor grounding.
  • Losses on the interface: losses during queue processing on an incoming or outgoing interface. Each interface has a memory buffer, which can be completely filled by an intensive data stream. In this case, all subsequent data arriving at the interface will be discarded, because it cannot be buffered.
  • Losses in the device: data discarded by the network device in accordance with its configuration logic. If the queues are full and incoming data cannot be added to a processing queue, the network device drops it. These losses also include data packets rejected by access lists or a firewall.

Figure 3 - Data packet loss example

Losses affect two further indicators, throughput and packet performance, which do not belong to the basic metrics.

Throughput

One of the main indicators used in practice is throughput, whose value depends on losses. Throughput is determined by the capabilities of the physical channel and by the ability of the intermediate network devices to process the data stream. Link throughput is defined as the maximum amount of data that can be transmitted from a source to a receiver per unit of time.

Packet performance

A parameter that affects throughput and the state of the queues is the packet performance of the device. Packet performance is the maximum number of data packets of a given length that a device is capable of transmitting per unit of time.

Real throughput depends on both packet performance and interface characteristics; therefore, at the network design stage, pay attention to the coherence of these parameters to avoid a situation where one of them becomes a bottleneck for a link or network segment.
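As a minimal illustration of how packet performance and frame size together bound throughput, the following sketch multiplies an assumed packet rate by the standard test frame sizes (the packet rate is made up, not a measured Infinet value):

```python
# Sketch: relating packet performance (packets/s) to throughput (bit/s)
# for the standard Ethernet test frame sizes. Figures are illustrative.
def throughput_mbps(pps, frame_bytes):
    """Throughput achievable at a given packet rate and frame size."""
    return pps * frame_bytes * 8 / 1e6

pps = 100_000  # assumed device packet performance, packets per second
for size in (64, 512, 1518):
    print(f"{size:>5} B frames -> {throughput_mbps(pps, size):8.1f} Mbit/s")
```

The same packet rate yields a far lower throughput with small frames, which is why a device's packet performance, not only its interface speed, can become the bottleneck.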

Packet performance is determined by the hardware capabilities of the central processor and the amount of internal memory. Network devices process multiple traffic streams with different L2 frame sizes, so the following Ethernet frame sizes are used for performance testing:

  • minimum size = 64 bytes;
  • medium size = 512 bytes;
  • maximum size = 1518 bytes.

Due to the limited amount of internal memory, the best packet performance is achieved for the minimum frame size. However, using minimal-size frames implies a large overhead: each data frame has a service header whose size does not depend on the size of the frame itself.

For example, the service header length for frames 64 bytes long (Figure 4b) and 156 bytes long (Figure 4c) is the same, but the amount of user data differs. To transmit 138 bytes of user data, either three 64-byte frames or one 156-byte frame is required: in the first case 192 bytes must be sent, in the second only 156 bytes. On a link with the same throughput, large frames therefore increase efficiency by raising the useful throughput of the system. The performance values of Infinet devices under various conditions are shown in the "Performance of the InfiNet Wireless devices" document.

Figure 4 - Examples of various lengths Ethernet frame structure
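The frame-count arithmetic from the example can be sketched as follows (assuming a fixed 18-byte service header, which matches the 64-byte frame carrying 46 bytes of user data in the example):

```python
# Sketch of the overhead arithmetic from the example: a fixed service
# header per frame (18 bytes here, an assumption consistent with the
# example) means small frames carry proportionally less user data.
HEADER = 18  # assumed fixed per-frame service header, bytes

def frames_needed(user_bytes, frame_size):
    payload = frame_size - HEADER
    return -(-user_bytes // payload)  # ceiling division

user_data = 138
for frame_size in (64, 156):
    n = frames_needed(user_data, frame_size)
    total = n * frame_size
    print(f"{frame_size} B frames: {n} frame(s), {total} B on the wire")
```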

Delay

Delay is the time it takes a data packet to travel from a source to a receiver. The delay value consists of the following parts:

  • Signal propagation time in the medium: depends on the physical characteristics of the medium and is non-zero in any case.
  • Serialization time: the conversion of a bit stream into a signal and back by the incoming/outgoing interfaces is not instantaneous and requires hardware resources of the network device.
  • Processing time: the time the data packet spends in the device. This time depends on the state of the packet queues, as a data packet is processed only after the packets placed in the queue earlier.

Delay is often measured as round-trip time (RTT), i.e. the time it takes for a data packet to travel from a source to a destination and back. For example, this value is used in the ping command results. The state of the intermediate network devices may differ while processing the packets in the forward and backward directions, therefore the round-trip time is usually not equal to twice the one-way delay.

Figure 5 - Example of data transfer delay 

Jitter

The CPU load and the state of the packet queues on intermediate network devices change frequently, so the delay of transmitted data packets can vary. In the example (Figure 6), the transmission times of the packets with identifiers 1 and 2 differ. The difference between the maximum and average delay values is called jitter.

Figure 6 - Example of floating delay in data transfer
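A sketch of the jitter definition above, using made-up per-packet delay samples:

```python
# Sketch: jitter as the difference between the maximum and the average
# one-way delay, computed over made-up delay samples (milliseconds).
delays_ms = [20.0, 26.0, 21.0, 25.0]

average = sum(delays_ms) / len(delays_ms)
jitter = max(delays_ms) - average
print(f"average delay = {average} ms, jitter = {jitter} ms")
```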

In a redundant network infrastructure, data between the source and the receiver can travel along different paths, which also leads to jitter. Sometimes the difference between delays in the link becomes so large that the order of the transmitted data packets changes on the receiving side (Figure 7). In the example, the packets were received in a different order than they were sent.

The effect of reordering depends on the service characteristics and on the ability of higher-level network protocols to restore the original sequence. For example, if the traffic of different services is transmitted along different paths, reordering between those flows does not affect the received data.

Figure 7 - Example of unordered data delivery 

Services requirements for quality indicators

Each data transfer service has a set of requirements for the quality indicators. RFC 4594 includes the following service types:

Service | Losses | Delay | Jitter
Network Control | low | low | low
Telephony | very low | very low | very low
Signaling | low | low | low
Multimedia Conferencing | medium | very low | low
Real-Time Interactive | low | very low | low
Multimedia Streaming | medium | medium | low
Broadcast Video | very low | medium | low
Low-Latency Data | low | medium | very low
Management | low | medium | medium
High-Throughput Data | low | high | high
Standard | undefined | undefined | undefined
Low-Priority Data | high | high | high

Application Category | Service Class | Signaled | Flow Behavior | G.1010 Rating
Application Control | Signaling | Not applicable | Inelastic | Responsive
Media-Oriented | Telephony | Yes | Inelastic | Interactive
Media-Oriented | Real-Time Interactive | Yes | Inelastic | Interactive
Media-Oriented | Multimedia Conferencing | Yes | Rate Adaptive | Interactive
Media-Oriented | Broadcast Video | Yes | Inelastic | Responsive
Media-Oriented | Multimedia Streaming | Yes | Elastic | Timely
Data | Low-Latency Data | No | Elastic | Responsive
Data | High-Throughput Data | No | Elastic | Timely
Data | Low-Priority Data | No | Elastic | Non-critical
Best Effort | Standard | Not Specified | - | Non-critical

QoS ensuring methods

The traffic of various services is transmitted over a single network infrastructure with limited resources; therefore, mechanisms must be provided for distributing these resources between the services.

Let's look at an example (Figure 8): Node-2 generates the traffic of several services with a total rate of 1 Gbit/s, and Medium-2 can deliver this data stream to an intermediate network device; however, the maximum link throughput between the network device and Node-5 is 500 Mbit/s. Obviously, the data stream cannot be processed completely and part of it must be dropped. The task of QoS is to make these drops manageable in order to provide the end services with the required metric values. Of course, it is impossible to provide the required performance for all services, as the link throughputs do not match; therefore, QoS policy implementation implies that critical service traffic is processed first.

Figure 8 - Example of inconsistency in incoming traffic amount and links throughputs

The example above illustrates the two main methods used in QoS policy implementation:

  • Prioritization: distribution of data among queues and selection of packets from the queues by priority. In this case, the packets most sensitive to delay and jitter are processed first, followed by the traffic for which the delay value is not critical.
  • Throughput limitation: limiting the throughput of traffic flows. All traffic that exceeds the set throughput threshold is discarded.

Let's extend the example above by adding a second intermediate device to the data distribution scheme (Figure 9a). The packet distribution scheme has the following steps:

  • Step 1:
    • Node-1 and Node-2 generate the packets of two services: telephony and mail. Telephony traffic is sensitive to delay and jitter, unlike mail service data (see Services requirements for quality indicators), therefore it must be processed first by the intermediate devices.
    • Network device-1 receives the packets of Node-1 and Node-2.
  • Step 2:
    • Traffic prioritization is configured on Network device-1, so the device classifies the incoming traffic and places the data packets in different queues. All telephony traffic falls into queue 0 and all mail traffic into queue 16; the priority of queue 0 is higher than that of queue 16.
    • Packets leave the queues and proceed to the outgoing interfaces in accordance with the queue priorities, i.e. queue 0 is emptied first, then queue 16.
  • Step 3:
    • Network device-1 sends the data to Medium-7, which is connected to Network device-2. The data packet sequence corresponds to the quality metrics: telephony data is transmitted to the medium first, followed by the mail service data.
    • Node-3 is connected to Network device-2 and generates a mail service data stream.
  • Step 4:
    • Network device-2 has no prioritization settings, so all incoming traffic falls into queue 16. Data leaves the queue in the same order it entered, i.e. telephony and mail service traffic are handled equally, despite their different requirements for the quality indicators.
    • Network device-2 increases the delay of the telephony traffic.
  • Step 5:
    • Data is transmitted to the end nodes. The telephony packet transmission time has increased due to the processing of the Node-3 mail service traffic.

Each intermediate network device without traffic prioritization settings increases the data transmission delay, and the amount of added delay is unpredictable. Thus, a large number of such intermediate devices will make the operation of real-time services impossible, because the quality indicators become unattainable, i.e. traffic prioritization must be performed along the entire traffic transmission path (Figure 9b).

Keep in mind that implementing a QoS policy is only one component of ensuring the quality metrics. For maximum effect, the QoS configuration should be coordinated with other settings. For example, using TDMA technology instead of Polling on InfiLINK 2x2 and InfiMAN 2x2 family devices reduces jitter by stabilizing the delay value (see TDMA and Polling: Application features).

Figure 9a - Example of data distribution with partly implemented QoS policy

Figure 9b - Example of data distribution with implemented QoS policy

Traffic prioritization mechanism

From the management point of view, the traffic transmission path in the network can be described in two ways (Figure 10a, b):

  • White-box: all network devices in the data propagation path are in the same responsibility zone. In this case, the QoS configuration on the devices can be synchronized in accordance with the requirements described in the section above.
  • Black-box: some network devices in the data propagation path are in an external responsibility zone. The classification rules for incoming data and the algorithm for extracting packets from queues are configured individually on each device. The packet queue architecture depends on the equipment manufacturer, so there is no guarantee of a correct QoS configuration on devices in the external responsibility zone and, as a result, no guarantee of high quality indicator values.

Figure 10a - White-box structure example

Figure 10b - Black-box structure example

To solve the described problem for the black-box network structure, the packet headers can be labeled: the priority required for packet processing is set in a header field and kept along the whole path. In this case, all intermediate devices can place incoming data in queues in accordance with the value of the field in which the priority is indicated. This requires the development of standard protocols and their implementation by equipment manufacturers.

Keep in mind that equipment located in an external responsibility zone usually does not support data prioritization in accordance with the priority values in the service headers. Traffic priority coordination at the border of responsibility zones should be performed at the administrative level.

The processing priority for a packet can be set in the service fields of various network protocols. This article describes the use of the Ethernet and IPv4 protocol headers.

Priority within an Ethernet (802.1p)

The Ethernet frame header includes the "User Priority" service field, which is used to prioritize data frames. The field is 3 bits in size, which allows 8 traffic classes to be selected: class 0 is the lowest priority, class 7 the highest. Keep in mind that the "User Priority" field is present only in 802.1q frames, i.e. frames tagged with a VLAN tag.

Figure 11 - Frame prioritization service field in Ethernet header
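As a sketch (assuming the standard 802.1Q tag layout and a made-up tag value), the 3-bit priority can be read from the upper bits of the 16-bit Tag Control Information field:

```python
# Sketch: extracting the 3-bit "User Priority" (PCP) field from the
# 16-bit TCI of an 802.1Q VLAN tag. The sample TCI value is made up.
def vlan_priority(tci):
    """PCP occupies the top 3 bits of the Tag Control Information."""
    return (tci >> 13) & 0x7

tci = 0xA064                 # PCP = 5, DEI = 0, VLAN ID = 100
print(vlan_priority(tci))    # 5
print(tci & 0xFFF)           # 100 (the VLAN ID in the low 12 bits)
```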

Priority within an IP

The IP protocol went through three historical stages in the development of the service field responsible for packet prioritization:

  1. When the protocol was approved, the IP packet header contained an 8-bit ToS (Type of Service) field (see RFC 791). ToS included the following fields (Figure 12a):
    1. Precedence: priority value.
    2. Delay: delay minimization bit.
    3. Throughput: throughput minimization bit.
    4. Reliability: reliability maximization bit.
    5. 2 bits with values equal to 0.
  2. 8 bits were still used for packet prioritization, however the ToS field now included the following fields (see RFC 1349):
    1. Delay.
    2. Throughput.
    3. Reliability.
    4. Cost: a bit to minimize the cost metric (using 1 bit whose value was previously zero).
  3. The IP header structure was changed (see RFC 2474). The 8 bits previously used for prioritization were redistributed in the following way (Figure 12b):
    1. DSCP (Differentiated Services Code Point): packet priority.
    2. 2 bits are reserved.

Thus, ToS allows 8 traffic classes to be distinguished (0 - the lowest priority, 7 - the highest), while DSCP allows 64 classes (0 - the lowest priority, 63 - the highest).

Figure 12a - ToS service field in IP packet header

Figure 12b - DSCP service field in IP packet header
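A sketch of reading both interpretations of the prioritization byte (the sample value is made up; DSCP occupies the upper 6 bits per RFC 2474, Precedence the upper 3 per RFC 791):

```python
# Sketch: reading the second byte of the IPv4 header two ways -- the
# legacy ToS Precedence (top 3 bits) and the DSCP value (top 6 bits).
def precedence(tos_byte):
    return tos_byte >> 5

def dscp(tos_byte):
    return tos_byte >> 2

byte = 0xB8              # sample value: binary 1011 1000
print(precedence(byte))  # 5
print(dscp(byte))        # 46
```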

Priority configuration

Many end nodes in the network do not support manipulation of the service headers: they cannot set or remove the priority, so this functionality should be implemented on the intermediate network devices.

Let's look at an example of data transmission from Node-1 to Node-2 through a DS domain and a third-party telecom operator's network (Figures 13a-c). The DS domain includes three devices: two of them are border devices for the domain and one is intermediate. Let's look at the steps of processing the data, using Ethernet frame transmission as an example (the basic principles discussed here are applicable to an IP packet or any other protocol that supports data prioritization):

  • Step 1: Node-1 generates an Ethernet frame for Node-2. The header contains no priority tag field (Figure 13a).
  • Step 2: Border network device-1 changes the Ethernet header, setting the priority to 1. Border devices should have configured rules for selecting Node-1 traffic from the general stream, so that the necessary priority is set only on these frames. In networks with a large number of traffic flows, the list of rules on border devices can become voluminous. Border network device-1 processes the frame in accordance with the set priority, placing it in the corresponding queue. The frame is transmitted to the outgoing interface and sent towards Intermediate network device-2 (Figure 13a).
  • Step 3: Intermediate network device-2 receives the Ethernet frame with priority 1 and places it in the corresponding queue. The device does not set or remove the priority in the frame header. The frame is transmitted towards Border network device-3 (Figure 13a).
  • Step 4: Border network device-3 processes the incoming frame in the same way as Intermediate network device-2 (see Step 3) and transmits it to the provider network (Figure 13a).
    • Step 4b: if it has been agreed that the traffic will be transmitted through the provider network with a priority other than 1, Border network device-3 must change the priority. In this example, the device changes the priority value from 1 to 6 (Figure 13b).
  • Step 5: during the frame transmission through the provider network, the devices are guided by the priority value in the Ethernet header (Figure 13a).
    • Step 5b: similarly to Step 5 (Figure 13b).
    • Step 5c: if there is no agreement on frame prioritization in accordance with the priority value specified in the Ethernet header, a third-party service provider can apply its own QoS policy to the traffic and set a priority that may not satisfy the QoS policy of the DS domain (Figure 13c).
  • Step 6: the border device in the provider network removes the priority field from the Ethernet header and passes the frame towards Node-2 (Figures 13a-c).

Figure 13a - Example of the Ethernet frame priority changing during transmission through two network segments (priority in segments is coordinated)

Figure 13b - Example of the Ethernet frame priority changing during transmission through two network segments (priority in segments is coordinated, but should be changed)

Figure 13c - Example of the Ethernet frame priority changing during transmission through two network segments (priority in segments is not coordinated)

Queues implementation in Infinet devices

For a device, analyzing the priority in service headers and processing data in accordance with these priorities is not a simple task, for the following reasons:

  • Devices recognize priorities automatically for different protocols. For example, InfiLINK XG family devices support 802.1p priorities but do not recognize DSCP priority values.
  • Devices that act as borders of the DS domain allow different sets of criteria to be used for traffic classification. For example, InfiMAN 2x2 devices allow a priority to be set by selecting all TCP traffic directed to port 23, while Quanta 5 family devices do not.
  • The number of queues implemented in devices differs and depends on the manufacturer. A correspondence table is used to establish the relation between the priority in the service header and the device's internal queue.

The tables below show the internal queue architecture, the priority management capabilities, and the relation between the protocol priority values and the internal ones.

Note an architectural feature of queuing in Infinet devices: all queues share a single memory buffer. If traffic falls into one queue, that queue's size will equal the size of the whole buffer; if several queues are in use, the memory buffer is divided evenly between them.

Table of packets internal queuing

Parameter | Description | InfiLINK 2x2 / InfiMAN 2x2 | InfiLINK XG / InfiLINK XG 1000 | Quanta 5 / Quanta 70
Marking criteria | A set of criteria that can be used to classify incoming traffic. | PCAP expressions (PCAP expressions allow flexible filtering based on any service header field, see the PCAP filters article) | vlan-id | vlan-id
Auto recognition | Protocols for which the device family automatically recognizes the priority set in the header and puts the data in the appropriate queue. | RTP, 802.1p, IPIP/GRE tunnels, MPLS, DSCP, ToS, ICMP, TCP Ack, PPPoE | 802.1p | 802.1p
Queues number | The number of data queues used in the device. | 17 | 4 | 8
Queues management | Supported mechanisms for picking packets from queues. | Strict, Weighted | Strict, Weighted | Strict, Weighted
QoS configuration via Web | Documentation on configuring traffic prioritization through the web interface. | QoS options, Traffic Shaping | Configuring QoS, Switch | Configuring per-VLAN, Switch settings
QoS configuration via CLI | Documentation on configuring traffic prioritization through the command line interface. | qm command | Commands for switch configuration | -
Correspondence table of protocols and internal priorities for InfiLINK 2x2, InfiMAN 2x2 family devices

Traffic class (in accordance with MINT) | InfiLINK 2x2, InfiMAN 2x2 queue | 802.1p | ToS (Precedence) | DSCP
Background | 16 | 0 | - | -
Regular best effort | 15 | 0 | 0 | 0
Business 6 | 14 | 1 | 1 | 8, 10
Business 5 | 13 | 1 | 1 | 12, 14
Business 4 | 12 | 2 | 2 | 16, 18
Business 3 | 11 | 2 | 2 | 20, 22
Business 2 | 10 | 2 | 3 | 24, 26
Business 1 | 9 | 2 | 3 | 28, 30
QoS 4 | 8 | 3 | 4 | 32
QoS 3 | 7 | 3 | 4 | 34
QoS 2 | 6 | 3 | 4 | 36
QoS 1 | 5 | 3 | 4 | 38
Video 2 | 4 | 4 | 5 | 40, 42
Video 1 | 3 | 4 | 5 | 44, 46
Voice | 2 | 5 | 6 | 48, 50
Control | 1 | 6 | 6 | 52, 54
NetCrit | 0 | 7 | 7 | 56, 58, 60, 62
Correspondence table of protocols and internal priorities for InfiLINK XG, InfiLINK XG 1000, Quanta 5, Quanta 70 family devices

Traffic class (in accordance with 802.1p) | 802.1p | InfiLINK XG, InfiLINK XG 1000 | Quanta 5, Quanta 70
Background (lowest priority) | 0 | 1 | 0
Best Effort | 1 | 1 | 1
Excellent Effort | 2 | 2 | 2
Critical Applications | 3 | 2 | 3
Video | 4 | 3 | 4
Voice | 5 | 3 | 5
Internetwork Control | 6 | 4 | 6
Network Control (highest priority) | 7 | 4 | 7

Queues management

Prioritization assumes the use of several packet queues, whose content must be transmitted to the outgoing interfaces through a common bus. Infinet devices support two mechanisms for transmitting packets from the queues to the bus: strict and weighted scheduling.

Strict scheduling

The strict prioritization mechanism empties the queues sequentially in accordance with the priority values. Packets with priority 2 are sent only after all packets with priority 1 have been transferred to the bus (Figure 14). After the packets with priorities 1 and 2 have been sent, the device starts sending packets with priority 3.

The drawback of this mechanism is that no resources are allocated to low-priority traffic as long as there are packets in higher-priority queues, which can lead to the complete inaccessibility of some network services.

Figure 14 - Strict packets scheduling
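The strict scheduling behaviour described above can be sketched as follows (queue numbers and packet names are arbitrary; the lowest-numbered queue has the highest priority):

```python
# Sketch: strict priority scheduling -- the highest-priority non-empty
# queue is always drained first.
from collections import deque

queues = {
    1: deque(["v1", "v2"]),   # e.g. voice, highest priority
    2: deque(["w1"]),         # e.g. web
    3: deque(["m1", "m2"]),   # e.g. mail, lowest priority
}

def next_packet(queues):
    """Pick from the highest-priority (lowest-numbered) non-empty queue."""
    for prio in sorted(queues):
        if queues[prio]:
            return queues[prio].popleft()
    return None

order = []
while (pkt := next_packet(queues)) is not None:
    order.append(pkt)
print(order)   # ['v1', 'v2', 'w1', 'm1', 'm2']
```

Note that if new packets kept arriving in queue 1, queues 2 and 3 would never be served, which is exactly the drawback described above.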

Weighted scheduling

Weighted scheduling does not have the disadvantage of strict scheduling. It allocates resources between all queues in accordance with weighting factors that correspond to the priority values. If there are three queues (Figure 15), the weighting factors can be distributed in the following way:

  • packets queue 1: weight = 3;
  • packets queue 2: weight = 2;
  • packets queue 3: weight = 1.

With weighted scheduling, each queue receives resources, i.e. there will be no situation in which a network service becomes completely inaccessible.

Figure 15 - Weighted packets scheduling
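A sketch of weighted scheduling with the 3 : 2 : 1 weights from the example above (implemented here as a simple weighted round-robin; packet names are arbitrary):

```python
# Sketch: weighted round-robin -- each pass drains up to `weight`
# packets per queue, so low-priority queues still get a share.
from collections import deque

queues = [
    (3, deque(["a1", "a2", "a3", "a4"])),  # weight 3
    (2, deque(["b1", "b2", "b3"])),        # weight 2
    (1, deque(["c1", "c2"])),              # weight 1
]

order = []
while any(q for _, q in queues):
    for weight, q in queues:
        for _ in range(min(weight, len(q))):
            order.append(q.popleft())
print(order)
# ['a1', 'a2', 'a3', 'b1', 'b2', 'c1', 'a4', 'b3', 'c2']
```

Even when the heaviest queue is backlogged, the lightest one still transmits one packet per pass, avoiding starvation.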

Traffic prioritization recommendations

Universal recommendations for configuring the traffic prioritization mechanisms:

  • Pay special attention to developing the QoS policy. The policy should take into account the traffic of all services used in the network and provide a strict correspondence between each service and a traffic class.
  • The QoS policy should take into account the devices' technical capabilities for recognizing and manipulating the service field values that indicate the data priority.
  • The rules for classifying traffic flows must be configured on the DS domain border devices.
  • The DS domain intermediate devices should automatically recognize traffic priorities.

Throughput limitation mechanism 

Network resources can be distributed between traffic flows not only by prioritization, but also using the throughput limitation mechanism. In this case, the stream bitrate cannot exceed the threshold level set by the network administrator.

The speed limitation principle in Infinet devices

The throughput limitation principle is to constantly measure the intensity of the data stream and apply restrictions when the intensity exceeds the set threshold (Figure 16a,b). Throughput limitation in Infinet devices is performed in accordance with the Token Bucket algorithm: all data packets above the throughput threshold are discarded, which produces the losses described above.

Figure 16a - Graph of unlimited data flow rate

Figure 16b - Graph of limited data flow rate

Token Bucket Algorithm

A logical buffer exists for each speed limit rule, containing the amount of data allowed for transfer. Usually the buffer size is larger than the limitation value. Each unit of time, this buffer is allocated an amount of data equal to the set bitrate threshold.

In the example (video 1), the speed limit is 3 data units per time interval and the buffer size is 12 data units. The buffer is constantly replenished according to the threshold, but it cannot be filled beyond its own volume.

Video 1 - Resource allocation to speed limit buffer

Data received by the device's inbound interface is processed only if the buffer contains enough resources (video 2); the passing data thus empties the buffer. If the buffer is empty when data arrives, the data is discarded.

Video 2 - Dedicated resources usage for data processing

Keep in mind that resource allocation to the buffer and data processing are performed simultaneously (video 3).

The intensity of data flows in packet networks is inconsistent, which demonstrates the advantage of the Token Bucket algorithm: time intervals in which no data is transmitted allow resources to accumulate in the buffer, which can then be used to process bursts exceeding the threshold. A wide band is thus allocated to bursty data streams, such as web traffic, ensuring quick web page loading and increasing end-user comfort.

Despite this advantage of the Token Bucket algorithm, the average throughput still matches the configured threshold: over a long time period, the amount of available resources is determined not by the buffer size, but by the rate of its replenishment, which equals the throughput threshold.

Video 3 - Data processing by the speed limit buffer
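The replenishment and consumption described above can be sketched as a minimal token bucket. The numbers follow the example (a limit of 3 data units per time interval and a 12-unit buffer); this is an illustrative model, not Infinet's actual implementation.

```python
class TokenBucket:
    """Minimal token bucket: `rate` units are added per tick,
    capped at `capacity`; a packet passes only if enough tokens
    are available, otherwise it is discarded."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start with a full bucket

    def tick(self):
        # replenish once per time interval, never above capacity
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def accept(self, size):
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False  # not enough tokens -> packet dropped

bucket = TokenBucket(rate=3, capacity=12)
burst_ok = bucket.accept(10)    # a 10-unit burst passes: tokens accumulated
second_ok = bucket.accept(10)   # a second burst in the same interval is dropped
bucket.tick()                   # next interval adds 3 tokens
```

Note how the accumulated tokens let a burst well above the 3-unit rate pass, while the long-term average remains bounded by the replenishment rate.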

The Token Bucket algorithm can be applied to separate traffic flows; in this case, a speed limit buffer is allocated for each flow (video 4).

In this example, two speed limit rules are configured: 3 data units per time interval for vlan 161 traffic and 2 data units for vlan 162 traffic. Each buffer holds 4 time intervals' worth of data, i.e. 12 data units for vlan 161 traffic and 8 data units for vlan 162 traffic. In total, 5 data units are allocated to the buffers in each time interval and then distributed between them. Since the buffer sizes are limited, resources exceeding a buffer's size cannot be used.

Video 4 - Resources allocation for two speed limit buffers

Each buffer's resources can only be used for the traffic of the corresponding service (video 5): vlan 161 traffic is handled using the vlan 161 buffer, and, similarly, vlan 162 traffic uses the vlan 162 buffer.

Video 5 - Usage of dedicated resources for data processing

Resource buffers can also be connected with each other. On Infinet devices, for example, allocated resource buffers can be linked via classes (see below): if one resource buffer is full (video 6), its resources can be provided to another buffer.

In the example, the buffer for vlan 162 traffic is full, which allows the vlan 161 buffer to be filled with all 5 allocated data units instead of 3, increasing the throughput of the vlan 161 service. When the vlan 162 buffer has free space again, resource allocation returns to the normal mode: 3 data units for vlan 161 traffic and 2 data units for vlan 162 traffic.

Video 6 - Allocated resources redistribution between various services traffic limitation buffers
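The redistribution shown in videos 4-6 can be modelled by letting replenishment that does not fit into a full bucket spill over to the other buckets. This is a simplified sketch using the example's numbers; the spill rule illustrates the class mechanism, not the exact device behaviour.

```python
def replenish(buckets, rates):
    """Give each bucket its own replenishment rate; any share that
    does not fit into a full bucket is offered to the other buckets."""
    spare = 0
    for name, b in buckets.items():
        free = b["capacity"] - b["tokens"]
        used = min(rates[name], free)
        b["tokens"] += used
        spare += rates[name] - used  # this bucket could not absorb its full share
    # redistribute the spare units to buckets that still have room
    for b in buckets.values():
        free = b["capacity"] - b["tokens"]
        take = min(spare, free)
        b["tokens"] += take
        spare -= take

# example: vlan 161 (rate 3, capacity 12) and vlan 162 (rate 2, capacity 8)
buckets = {
    "vlan161": {"tokens": 0, "capacity": 12},
    "vlan162": {"tokens": 8, "capacity": 8},   # already full
}
replenish(buckets, {"vlan161": 3, "vlan162": 2})
# vlan162 is full, so its 2 units spill over: vlan161 receives 3 + 2 = 5
```

Once the vlan 162 buffer has free space again, its 2-unit share stays with it and the allocation returns to the normal 3/2 split.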

Types of speed limits in Infinet devices

The throughput limitation principle described above is implemented in Infinet devices in two ways:

  • Physical interface traffic shaping: limitations are applied to the entire traffic passing through the physical interface. This method is easy to configure (specify the interface and a threshold value), but it does not allow limitations to be applied to the traffic of a specific network service.
  • Traffic flow shaping: limitations are applied to a logical data flow, separated from the main traffic by specified criteria. This allows throughput limitations to be applied to the traffic of network services distinguished by service header field values. For example, traffic with vlan 42 can be separated into a logical channel and limited in throughput without influencing other traffic flows.

Infinet devices allow hierarchical throughput allocation structures to be configured. Two object types are used for this: a logical channel and a class, connected by a child-parent relationship. A class has a throughput value distributed between its child logical channels, and each channel has guaranteed and maximum throughput values - CIR and MIR.

Let's look at an example of transmitting the traffic of two services, associated with vlan IDs 161 and 162, between Master and Slave (Figure 17a). The total service traffic should not exceed 9 Mbps.

The Master device can be configured in the following way (Figure 17b):

  • Class 16 is configured with a 9 Mbps throughput.
  • Class 16 is the parent of channels 161 and 162, i.e. the total traffic of these logical channels is limited to 9 Mbps.
  • Traffic with vlan ID 161 is associated with logical channel 161, and traffic with vlan ID 162 with logical channel 162.
  • The CIR value is 4 Mbps for channel 161 and 5 Mbps for channel 162. If both services actively exchange data, each channel's traffic threshold equals its CIR.
  • The MIR value is 9 Mbps for channel 161 and 7 Mbps for channel 162. If there is no traffic in logical channel 162, the threshold for channel 161 equals its MIR, i.e. 9 Mbps; similarly, an idle channel 161 allows channel 162 to reach its MIR of 7 Mbps.

Figure 17a - Example throughput limit for traffic with vlan-id's 161, 162

Figure 17b - Hierarchical channel structure of throughput limits for vlan's 161 and 162 traffic
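The thresholds in this example can be reproduced with a small allocation sketch: each active channel first receives its CIR, then the remaining class throughput is handed out up to each channel's MIR. This is a simplified model of the class/channel hierarchy; the order in which spare capacity is offered is an illustrative assumption, not the device's exact algorithm.

```python
def allocate(class_rate, channels, demand):
    """channels: {name: (cir, mir)}; demand: {name: requested Mbps}.
    Each channel is guaranteed its CIR; leftover class throughput
    is distributed up to each channel's MIR."""
    alloc = {}
    for name, (cir, mir) in channels.items():
        alloc[name] = min(demand[name], cir)   # CIR is guaranteed
    spare = class_rate - sum(alloc.values())
    for name, (cir, mir) in channels.items():
        # extra bandwidth is bounded by the MIR and the remaining demand
        extra = min(spare, mir - alloc[name], demand[name] - alloc[name])
        alloc[name] += extra
        spare -= extra
    return alloc

channels = {"ch161": (4, 9), "ch162": (5, 7)}
# both services saturated: each channel is held at its CIR (4 and 5 Mbps)
both = allocate(9, channels, {"ch161": 9, "ch162": 7})
# vlan 162 idle: channel 161 can grow to its MIR of 9 Mbps
idle = allocate(9, channels, {"ch161": 9, "ch162": 0})
```

With both channels loaded the 9 Mbps class splits into the 4 + 5 Mbps CIRs; with channel 162 idle, channel 161 expands to its 9 Mbps MIR, matching the figure.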

The throughput limitation capabilities of all Infinet device families are shown in the table below:

Table of throughput limitation capabilities in Infinet devices
| Parameter | Description | InfiLINK 2x2 / InfiMAN 2x2 | InfiLINK XG / InfiLINK XG 1000 |
| Interface shaping | Throughput limitation capability for the device's physical interfaces. | - | GE0, GE1, SFP, mgmt |
| Logical stream shaping | Throughput limitation capability for a traffic stream separated according to one or more criteria. | up to 200 logical channels | - |
| Traffic directions | Ability to apply limitations to incoming/outgoing traffic flows. | incoming and outgoing | outgoing |
| Limitations hierarchy | Ability to create a system of mutual hierarchical limitations. | up to 200 channels, which are children of the logical channels | - |
| Logical stream rules | Criteria used to divide data streams. | PCAP expressions support (PCAP expressions allow flexible limitation based on any service header field, see the PCAP filters article) | - |
| Shaping in Web | Documentation about throughput limitation settings in the Web interface. | Traffic shaping | Switch |
| Shaping in CLI | Documentation about throughput limitation settings via CLI. | qm command | Commands for switch configuration |

Recommendations for throughput limitation configuration 

Use the following recommendations when configuring throughput limitation:

  • The traffic of all network services should be limited. This gives control over all traffic flows and allows resources to be allocated to them consciously.
  • Throughput limitation should be performed on the devices closest to the data source; there is no need to duplicate throughput limiting rules along the chain of intermediate devices.
  • Many network services are bidirectional, which requires applying restrictions to both incoming and outgoing traffic on the devices.
  • To set correct throughput threshold values, first evaluate the average and maximum traffic of the services, paying special attention to the busiest hours. The data for this analysis can be collected via the InfiMONITOR monitoring system.
  • The sum of the CIR values of the logical channels associated with one class should not exceed the maximum class throughput.
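The last recommendation is a simple arithmetic constraint that can be checked mechanically; a hypothetical validation sketch (the function name and data shapes are illustrative, not part of any Infinet tool):

```python
def cir_sum_ok(class_rate, channel_cirs):
    """Return True if the sum of the child channels' CIR values
    does not exceed the parent class's maximum throughput (Mbps)."""
    return sum(channel_cirs.values()) <= class_rate

# example from Figure 17b: CIRs of 4 + 5 Mbps against a 9 Mbps class
valid = cir_sum_ok(9, {"ch161": 4, "ch162": 5})        # within the class limit
oversubscribed = cir_sum_ok(9, {"ch161": 6, "ch162": 5})  # 11 > 9, invalid
```

If the check fails, the class cannot honour every channel's guarantee simultaneously, so either the class throughput or the CIR values must be revised.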

Additional materials

White papers

  1. TDMA and Polling: Application features.
  2. Performance of the Infinet Wireless devices.

Webinars

  1. QoS policies configuration in Infinet Wireless devices.

Videos

  1. Quality of Service With Infinet Wireless Units.

Others

  1. RFC 4594.
  2. RFC 791.
  3. RFC 1349.
  4. RFC 2474.
  5. InfiMONITOR monitoring system.
  6. InfiLINK 2x2, InfiMAN 2x2 family devices web interface. QoS options.
  7. InfiLINK 2x2, InfiMAN 2x2 family devices web interface. Traffic shaping.
  8. InfiLINK XG, InfiLINK XG 1000 family devices web interface. Configuring QoS.
  9. Quanta 5 family devices web interface. Switch settings.
  10. Quanta 70 family devices web interface. Switch settings.
  11. QoS configuration in OS WANFleX.