
Introduction

The evolution of data networks entails an increase in the volume of the transmitted traffic, which requires the usage of a quality of service policy. The implementation of the policy allows the classification of the network traffic and the distribution of the network resources between different traffic classes.

Terminology

  • QoS (Quality of Service) - a technology that allows to classify data streams and to prioritize each stream's transmission in accordance with its class.
  • QoS policy - a document describing the principles of traffic stream classification and the resource requirements for each class.
  • Traffic stream - the data of one service transmitted between two nodes.
  • Service - a process running on end nodes whose data needs to be transmitted between the nodes. The data of a service is distinguished by a unique set of service field values within the packet's structure. IP telephony, web, and video surveillance are examples of services.
  • Responsibility area - a network segment whose effective operation is the responsibility of a certain subject. A subject can be either a specific person or an organization.
  • DS domain (Differentiated Services domain) - a logical area having uniform traffic classification rules, defined by a QoS policy. Usually the DS domain coincides with the responsibility area.
  • CIR - Committed Information Rate. The system must guarantee the resource allocation in compliance with the CIR of the service.
  • MIR - Maximum Information Rate. If the CIR is ensured, additional resources may be allocated to the services. The additional resources cannot exceed the MIR threshold and their allocation is not guaranteed.

Anchor
packet_life
packet_life


Packet distribution scheme

In packet networks, the traffic is transmitted from the sending node to the receiving node through communication channels and intermediate devices. Generally, a data packet is processed by each intermediate device independently. Let's look at an example of data packet processing performed by an intermediate network device (Figure 1):

  1. Node-2 generates a data frame and transmits it to Medium-2. The data packet is encapsulated in a frame based on the L2 protocol that is used in Medium-2.
  2. The data frame is distributed in Medium-2: the frame is converted into a modulated signal according to the physical properties of the medium. The signals used in wired and wireless environments are different, and this affects their propagation properties and their usage scenarios.
  3. The signal arrives at the incoming network interface of the intermediate network device; after demodulation, the received data frame is checked for integrity: the damaged frames are discarded.
  4. Next, the frame must be processed by the switching module in order to determine its path. If the frame is addressed to this intermediate network device, it will be passed for processing to the internal services. If the frame is addressed to another node, two scenarios are possible: the frame is passed further through processing until it reaches the output interface, or it is discarded (if Medium-2 is a common medium, all signals will be received by all devices connected to it; according to the L2 protocol's operational principles, if the destination address in the frame's header does not belong to the device, the device should discard it).
  5. If the frame should be processed and transferred to another node, before exiting the device it will be placed into a packet queue. A packet queue is a set of buffers that contain the data received by the incoming interfaces. The number and size of the memory buffers used for the packet storage are not standardized and depend on the equipment's manufacturer. For example, the InfiLINK 2x2 family of devices has 32 queues, 17 of which are available to the user for configuration.
  6. The data frame passes through the packet queue to which it was assigned and arrives at the outgoing interface.
  7. Since packet queues are a link between incoming and outgoing interfaces, a device should have a controller that fills the queues with the incoming data and picks data from the queues for transmission to the outgoing interfaces. Usually, these functions are performed by the central processing unit (CPU). As will be shown below, the filling and the emptying of the queues can be performed unevenly and depends on the classification of the data streams.
  8. The outgoing interface generates a modulated signal and transmits it to Medium-5, which is connected to Node-5, the destination of the original data frame.
  9. Node-5 receives the signal, demodulates it and processes the received data frame.

Note that in modern network devices the network interfaces are usually combined and can operate both as incoming and outgoing.

Center


Figure 1 - Traffic passing through an intermediate network device

A network device can be intermediate for several pairs of nodes, and each node can transmit the data of several services (Figure 2a). Let's look at a scheme where the "Network device" is an intermediate node for the traffic coming from the following pairs of nodes: Node-1 - Node-4, Node-2 - Node-5 and Node-3 - Node-6. The first pair transmits data for three services, the second for two, and the third for one service. If there are no QoS settings, the data of all services gets through the general queue in the order it is received at the "Network device", and in the same order it will be transferred from the queue to the outgoing interfaces.

With QoS configured, each of the incoming traffic flows can be classified, for example by its type, and a separate queue can be mapped to each class (Figure 2b). Each packet queue can be assigned a priority, which will be taken into account while extracting the packets from the queues and will guarantee specific quality indicators. The traffic flow classification can be performed not only with respect to the services used, but also according to other criteria. For example, each pair of nodes can be assigned to a separate packet queue (Figure 2c).
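To make the classification idea concrete, the mapping between traffic classes and packet queues can be modelled as a simple classifier feeding per-class buffers. The sketch below is a minimal, hypothetical Python model; the service names and queue numbers are illustrative and do not correspond to any particular vendor configuration.

```python
from collections import deque

# Hypothetical mapping of service classes to queue numbers (illustrative only)
CLASS_TO_QUEUE = {"telephony": 0, "video": 8, "mail": 16}

queues = {q: deque() for q in set(CLASS_TO_QUEUE.values())}

def enqueue(packet: dict) -> None:
    """Place a packet into the queue mapped to its service class."""
    queue_id = CLASS_TO_QUEUE.get(packet["service"], 16)  # unclassified traffic -> lowest class
    queues[queue_id].append(packet)

for pkt in ({"service": "mail", "id": 1}, {"service": "telephony", "id": 2}):
    enqueue(pkt)

print({q: len(queues[q]) for q in sorted(queues)})  # {0: 1, 8: 0, 16: 1}
```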

Center


Figure 2a - Queuing for various services without QoS


Figure 2b - Queuing for various services with QoS

Figure 2c - Queuing for various users with QoS

Keep in mind that several intermediate network devices can be located on the data path between the source and the receiver, each having independent packet queues, i.e. an effective QoS policy implementation will require the configuration of several network nodes.

Anchor
quality_metrics
quality_metrics
Quality indicators

The main conclusions from the previous section, which will be used to define the quality metrics, are the following:

  • The throughput of the communication channel and of the network devices is limited.
  • The data delivery time from source to destination is non-zero.
  • A communication channel is a medium with a set of physical parameters that can influence the signal propagation.
  • The software and hardware architecture of the network devices impacts the way in which the data is transmitted.

There are three main quality metrics:

  • Losses.
  • Delay.
  • Jitter.

Let's look at each metric using an example: Node-2 transmits three data packets to Node-5; the data source and the recipient are connected to an intermediate network device, and the packets are transmitted as part of the same service, i.e. their key service fields are the same.

Losses

During a data stream transmission, some packets may not be received, or may be received with errors. This process is called data loss and it is defined as the ratio between the number of received packets and the number of transmitted packets. In the example below (Figure 3), Node-2 transmits packets with the identifiers 1, 2 and 3, however, Node-5 receives only packets 1 and 3, i.e. the packet with the identifier 2 was lost. There are network mechanisms which allow the retransmission of the lost data. Examples of such mechanisms are the TCP and ARQ protocols.
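As a small illustration of this metric, the loss ratio for the example in Figure 3 can be computed directly from the identifiers of the sent and received packets (the numbers below are the ones from the figure):

```python
sent = {1, 2, 3}       # identifiers transmitted by Node-2
received = {1, 3}      # identifiers received by Node-5

delivery_ratio = len(received) / len(sent)
loss_ratio = 1 - delivery_ratio
print(f"delivered: {delivery_ratio:.0%}, lost: {loss_ratio:.0%}")  # delivered: 67%, lost: 33%
```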

The causes of data loss can be divided into the following groups:

  • Losses in the medium: losses related to the propagation of the signal in the physical environment. For example, the frame will be lost if the useful signal level is lower than the receiver sensitivity. Losses can also be caused by the physical damage of the interfaces connected to the media or by impulse pickups resulting from poor grounding.
  • Losses on the interface: losses while processing a queue at the incoming or at the outgoing interface. Each interface has a memory buffer, which can be completely filled in case of intensive data stream transmissions. In this case, all the subsequent data entering the interface will be discarded, because it cannot be buffered.
  • Losses inside the device: data discarded by the network device according to the configuration logic. If the queues are full and the incoming data cannot be added to the processing queue, the network device will drop it. These losses also include the data packets rejected by access lists and by the firewall.
Center


Figure 3 - Data packet loss example

The losses affect two indicators that do not belong to the basic ones: throughput and packet performance.

Throughput

One of the main indicators used in practice is the throughput, whose value depends on the losses. The throughput is defined by the capabilities of the physical channel and by the ability of the intermediate network devices to process the data stream. The link throughput is defined as the maximum amount of data that can be transmitted from the source to the receiver per unit of time.

Packet performance

The parameter that affects the throughput and the state of the queues is the packet performance of the device. Packet performance is the maximum number of data packets of a given length that a device is capable to process per unit of time.

The real throughput depends both on the packet performance and on the interface's characteristics, therefore, at the network design stage, pay attention to the coherence of these parameters in order to avoid the situation when one of them becomes a bottleneck for a link or for a network segment.

The packet performance is defined by the hardware capabilities of the central processor and by the amount of internal memory. Network devices process multiple traffic streams with different L2 frame sizes, so the following Ethernet frame size values are used for a performance test:

  • minimum size = 64 bytes;
  • medium size = 512 bytes;
  • maximum size = 1518 bytes.

Due to the limited amount of internal memory, better packet performance is achieved for the minimum frame size. Using minimum sized frames assumes a large amount of overhead: each data frame has a service header, whose size does not depend on the size of the frame itself.

For example, the service header length for 64 bytes long frames (Figure 4b) and 156 bytes long frames (Figure 4c) will be the same, but the user data amount will be different. To transmit 138 bytes of user data, three 64 bytes long frames or one 156 bytes long frame will be required, so in the first case 192 bytes are sent, in the second - only 156 bytes. For a link having a fixed throughput, large frames will increase the efficiency by rising the useful throughput of the system, but the latency will also increase. The performance of the Infinet devices in various conditions is shown in the "Performance of the Infinet Wireless devices" document.
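The 192-byte versus 156-byte comparison above can be reproduced with a short calculation. The sketch below assumes a fixed per-frame overhead of 18 bytes (Ethernet header plus FCS, ignoring the preamble and inter-frame gap), which matches the 46-byte payload of a 64-byte frame used in the example:

```python
OVERHEAD = 18  # assumed Ethernet header + FCS per frame, in bytes

def bytes_on_wire(user_data: int, frame_size: int) -> int:
    """Total bytes sent to deliver `user_data` bytes using frames of `frame_size` bytes."""
    payload_per_frame = frame_size - OVERHEAD
    frames = -(-user_data // payload_per_frame)   # ceiling division
    return frames * frame_size

for size in (64, 156, 1518):
    total = bytes_on_wire(138, size)
    print(f"{size:4d}-byte frames: {total} bytes on the wire, efficiency {138 / total:.0%}")
```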

Center

Figure 4 - Frame structure for various Ethernet frame lengths

Delay

Delay is defined as the time it takes for a packet to travel from the source to the destination. The value of the delay depends on the following aspects:

  • The signal's propagation duration in the medium: depends on the physical characteristics of the medium and it is nonzero in any case.
  • Serialization time: the conversion of a bitstream to a signal and backward by the incoming/outgoing interfaces is not instantaneous and makes use of the hardware resources of the network device.
  • Processing time: the time the data packet spends inside the network device. This time depends on the status of the packet queues, as a data packet will be processed only after processing the packets placed in its queue earlier.

The delay is often measured as a round-trip time (RTT), i.e. the time it takes for the data packet to be transmitted from the source to the destination and backward. For example, this value can be seen in the ping command's results. The time it takes for the intermediate network devices to process the data packets forward and backward may differ, therefore, usually the round-trip time is not equal to the double of the one-way delay.
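The delay components can be made concrete with a rough estimate: propagation time plus serialization time plus processing time per direction. The distances, rates and processing times below are arbitrary illustrative values, not measurements, and the forward and reverse directions are deliberately given different processing times to show why the RTT is not simply twice the one-way delay.

```python
def one_way_delay_ms(distance_km: float, link_rate_bps: float,
                     frame_bits: int, processing_ms: float) -> float:
    propagation = distance_km / 200_000 * 1000         # ~200,000 km/s in the medium, in ms
    serialization = frame_bits / link_rate_bps * 1000   # time to put the frame on the wire, in ms
    return propagation + serialization + processing_ms

forward = one_way_delay_ms(50, 100e6, 1518 * 8, processing_ms=0.5)
reverse = one_way_delay_ms(50, 100e6, 64 * 8, processing_ms=2.0)   # different queueing backward
print(f"forward = {forward:.2f} ms, reverse = {reverse:.2f} ms, RTT = {forward + reverse:.2f} ms")
```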

Center


Figure 5 - Example of data transfer delay 

Jitter

The CPU load and the status of the packet queues are frequently changing at the intermediate network devices, so the delay during the data packet transmission will vary. In the example below (Figure 6), the transmission time for the packets with the identifiers 1 and 2 is different. The difference between the maximum and the average delay values is called jitter.
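Following this definition, jitter can be computed from a series of per-packet delays; the delay values below are illustrative.

```python
delays_ms = [12.1, 15.4, 11.9, 19.8, 12.3]   # hypothetical one-way delays of five packets

average = sum(delays_ms) / len(delays_ms)
jitter = max(delays_ms) - average             # as defined above: maximum delay minus average delay
print(f"average delay = {average:.1f} ms, jitter = {jitter:.1f} ms")
```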

Center

Figure 6 - Example of varying delay in data transfer

When using a redundant network infrastructure, the data between the source and the receiver can be transmitted through different paths, so jitter will occur. Sometimes the difference between the delays on each path may become so large that the order of the transmitted data packets will change on the receiving side (Figure 7). In the example below, the packets were received in a different order than they were sent.

The effect depends on the characteristics of the service and on the ability of the higher layer network protocols to restore the original sequence. Usually, if the traffic of different services is transmitted through different paths, it should not affect the ordering of the received data.

Center

Figure 7 - Example of unordered data delivery

Anchor
service_class
service_class


Service requirements with respect to the quality indicators

Each of the data transfer services has a set of requirements for the quality indicators. The RFC 4594 document includes the following service types:

Center
Service | Losses | Delay | Jitter
Network Control | low | low | low
Telephony | very low | very low | very low
Signaling | low | low | low
Multimedia Conferencing | medium | very low | low
Real-Time Interactive | low | very low | low
Multimedia Streaming | medium | medium | low
Broadcast Video | very low | medium | low
Low-Latency Data | low | medium | very low
Management | low | medium | medium
High-Throughput Data | low | high | high
Standard | undefined | undefined | undefined
Low-Priority Data | high | high | high



Expand
titleIn accordance with RFC 4594
Center
Application Categories | Service Class | Signaled | Flow Behavior | G.1010 Rating
Application Control | Signaling | Not applicable | Inelastic | Responsive
Media-Oriented | Telephony | Yes | Inelastic | Interactive
Media-Oriented | Real-Time Interactive | Yes | Inelastic | Interactive
Media-Oriented | Multimedia Conferencing | Yes | Rate Adaptive | Interactive
Media-Oriented | Broadcast Video | Yes | Inelastic | Responsive
Media-Oriented | Multimedia Streaming | Yes | Elastic | Timely
Data | Low-Latency Data | No | Elastic | Responsive
Data | High-Throughput Data | No | Elastic | Timely
Data | Low-Priority Data | No | Elastic | Non-critical
Best Effort | Standard | Not Specified | Non-critical



QoS methods

The traffic transmission of the various services is performed on a single network infrastructure, which has limited resources, therefore, mechanisms should be provided for distributing the resources between the services.

Let's look at the example below (Figure 8). Node-2 generates traffic serving different services with a total speed of 1 Gbit/s. Medium-2 allows to transfer this data stream to an intermediate network device, however, the maximum link throughput between the Network device and Node-5 is 500 Mbps. Obviously, the data stream cannot be processed completely and part of this stream must be dropped. The QoS task is to make these drops manageable in order to provide the required metric values for the end services. Of course, it is impossible to provide the required performance for all the services, as the throughput does not match, therefore, the QoS policy implementation involves that the traffic of the critical services should be processed first.

Center

Figure 8 - Example of inconsistency between the incoming traffic amount and the link throughput

Two main methods used during the QoS policy implementation can be highlighted:

  • Prioritization: the distribution of the data packets into queues and the extraction of the packets from the queues by their priority. In this case, the packets that are most sensitive to delay and jitter are processed first, then the traffic for which the delay value is not critical is processed.
  • Throughput limitation: limitation of the throughput of the traffic flows. All the traffic that exceeds the set throughput threshold will be discarded.

Let's look at the example above, and add a second intermediate device to the data distribution scheme (Figure 9a). The packet distribution follows the next steps:

  • Step 1:
    • Node-1 and Node-2 generate packets for two services: telephony and mail. The telephony traffic is sensitive to delay and jitter unlike the mail service data (see Service requirements with respect to the quality indicators), therefore, it must be processed first by the intermediate devices.
    • Network device-1 receives the packets of Node-1 and of Node-2.
  • Step 2:
    • Traffic prioritization is configured on Network device-1, thus the device classifies the incoming traffic and places the data packets in different queues. All the voice traffic will be put in queue 0, and the mail traffic will be put in queue 16. The priority of queue 0 is higher than the one of queue 16.
    • The packets leave the queues and proceed towards the outgoing interfaces in accordance with the queue priorities, i.e. queue 0 will be emptied first, then queue 16.
  • Step 3:
    • Network device-1 sends data to Medium-7, which is connected with Network device-2. The sequence of data packets corresponds to the quality metrics - the telephony data is transmitted through the medium first, and the mail service data is sent next.
    • Node-3 is connected to Network device-2 and generates a mail service data stream.
  • Step 4:
    • Network device-2 has no prioritization settings, thus all the incoming traffic is put in queue 16. The data will leave the queue in the same order that it entered, i.e. the telephony and the mail services will be handled equally, despite the requirements of the quality indicators.
    • Network device-2 increases the delay for the telephony traffic transmission.
  • Step 5:
    • The data is transmitted to the end nodes. The transmission time of the voice packets was increased due to the additional processing of Node-3's mail service traffic.

Each intermediate network device without traffic prioritization settings will increase the data transmission delay, and the value of that delay is unpredictable. Thus, having a large number of intermediate devices without QoS policies implemented will make the operation of real-time services impossible, because the quality indicators become unattainable, i.e. traffic prioritization must be performed along the entire traffic transmission path (Figure 9b).

Keep in mind that implementing QoS policies is only one method to ensure the quality metrics. For an optimal effect, the QoS configuration should be synchronized with other settings. For example, using the TDMA technology instead of Polling on the InfiLINK 2x2 and InfiMAN 2x2 families of devices reduces jitter by stabilizing the value of the delay (see TDMA and Polling: Application features).

Center

Figure 9a - Example of data distribution with partly implemented QoS policies

Figure 9b - Example of data distribution with implemented QoS policies


The traffic prioritization mechanism

From the management point of view, the traffic transmission path through the network can be described in two ways (Figure 10a, b):

  • White-box: all the network devices along the data propagation path are in the same responsibility zone. In this case, the QoS configuration on the devices can be synchronized, according to the requirements specified in the section above.
  • Black-box: some network devices in the data propagation path are part of an external responsibility zone. The classification rules for the incoming data and the algorithm for emptying the queues are configured individually on each device. The implementation of the packet queues depends on the equipment manufacturer, therefore there is no guarantee of a correct QoS configuration on the devices in the external responsibility zone, and as a result, there is no guarantee of high-quality performance indicators.
Center


Figure 10a - White-box structure example


Figure 10b - Black-box structure example

To solve the described problem for the black-box network structure, the packet headers can be labeled: the priority required during packet processing is set in a header field and is kept over the whole path. In this case, all intermediate devices can put the incoming data in a queue according to the field values in which the priority is indicated. This requires the development of standard protocols and the implementation of these protocols by the equipment manufacturers.

Keep in mind that usually the equipment located in an external responsibility zone does not support data prioritization in accordance with the priority values in the service headers. Traffic priority coordination at the border of the responsibility zones should be performed at the administrative level, by implementing additional network configuration settings.

The processing priority of a packet can be set using the service fields of various network protocols. This article describes the use of the Ethernet and of the IPv4 protocol headers.


Ethernet (802.1p) frame prioritization

The Ethernet frame header includes the "User Priority" service field, which is used to prioritize the data frames. The field has a size of 3 bits, which allows to select 8 traffic classes: class 0 - the lowest priority, class 7 - the highest priority. Keep in mind that the "User Priority" field is present only in 802.1q frames, i.e. frames using VLAN tagging.
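The "User Priority" bits (also called PCP) occupy the three most significant bits of the 16-bit TCI field of the 802.1q tag, so they can be read or rewritten with simple bit operations. The sketch below is a minimal illustration of that bit layout; the sample TCI value (priority 1, VLAN ID 161) is arbitrary.

```python
def get_priority(tci: int) -> int:
    """Extract the 3-bit User Priority from a 16-bit 802.1q TCI value."""
    return (tci >> 13) & 0x7

def set_priority(tci: int, priority: int) -> int:
    """Return the TCI with the User Priority bits replaced by `priority` (0..7)."""
    return (tci & 0x1FFF) | ((priority & 0x7) << 13)

tci = 0x20A1                        # User Priority = 1, VLAN ID = 161 (example values)
print(get_priority(tci))            # 1
print(hex(set_priority(tci, 6)))    # 0xc0a1 -> priority 6, VLAN ID unchanged
```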

Center


Figure 11 - Frame prioritization service field in the Ethernet header


IP packet prioritization

The IP protocol has three historical stages in the development of the service field responsible for packet prioritization:

  1. When the protocol was first approved, there was an 8-bit ToS (Type of Service) field in the IP packet header (see RFC 791). ToS included the following fields (Figure 12a):
    1. Precedence: priority value (3 bits).
    2. Delay: delay minimization bit.
    3. Throughput: throughput minimization bit.
    4. Reliability: reliability maximization bit.
    5. 2 bits with values equal to 0.
  2. In the second stage, 8 bits were still used for packet prioritization, however, ToS included the following fields (see RFC 1349):
    1. Delay.
    2. Throughput.
    3. Reliability.
    4. Cost: bit to minimize the cost metric (1 bit is used, whose value was previously zero).
  3. Last, the IP header structure has been changed (see RFC 2474). The 8 bits previously used for prioritization were distributed in the following way (Figure 12b):
    1. DSCP (Differentiated Services Code Point): packet priority (6 bits).
    2. 2 bits are reserved.

Thus, ToS allows to distinguish 8 traffic classes: 0 - the lowest priority, 7 - the highest priority, while DSCP allows 64 classes: 0 - the lowest priority, 63 - the highest priority.
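Since DSCP reuses the former ToS octet, both interpretations can be read from the same byte of the IPv4 header: the Precedence value from the top three bits (RFC 791) and the DSCP value from the top six bits (RFC 2474). A minimal sketch; the sample byte 0xB8 corresponds to the commonly used EF marking and is given only as an example.

```python
def precedence(tos_byte: int) -> int:
    """IP Precedence: the three most significant bits of the ToS octet (RFC 791)."""
    return (tos_byte >> 5) & 0x7

def dscp(tos_byte: int) -> int:
    """DSCP: the six most significant bits of the same octet (RFC 2474)."""
    return (tos_byte >> 2) & 0x3F

tos = 0xB8
print(precedence(tos), dscp(tos))   # 5 46
```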

Center


Figure 12a - ToS service field in the IP packet header


Figure 12b - DSCP service field in the IP packet header

Priority configuration

Many end nodes in the network do not support the handling of the service headers: they cannot set or remove the priority, so this functionality should be implemented on the corresponding intermediate network devices.

Let's look at the example of a data transmission from Node-1 to Node-2 through a DS-domain and through a third-party telecom operator's network (Figures 13a-c). The DS domain includes three devices, two of them are located at the domain border and one is an intermediate device. Let's look at the steps taken for data processing in a network using an Ethernet frame transmission (the basic principles discussed in the example below are applicable for an IP packet or any other protocol that supports data prioritization):

  • Step 1: Node-1 generates an Ethernet frame for Node-2. There is no priority field present in the frame header (Figure 13a).
  • Step 2: Border network device-1 changes the Ethernet header, setting the priority to 1. Border devices should have configured rules in order to filter the traffic of Node-1 from the general stream and to assign a priority to it. In networks with a large number of traffic flows, the list of rules on border devices can be voluminous. Border network device-1 processes the frame according to the set priority, placing it in the corresponding queue. The frame is transmitted towards the outgoing interface and sent to Intermediate network device-2 (Figure 13a).
  • Step 3: Intermediate network device-2 receives the Ethernet frame having priority 1 and places it in the corresponding queue. The device does not handle the priority in terms of changing or removing it inside the frame header. The frame is next transmitted towards Border network device-3 (Figure 13a).
  • Step 4: Border network device-3 processes the incoming frame similarly to Intermediate network device-2 (see Step 3) and forwards it towards the service provider's network (Figure 13a).
    • Step 4a: in case of agreeing that the traffic will be transmitted through the provider's network with a priority other than 1, Border network device-3 must perform a priority change. In this example, the device changes the priority value from 1 to 6 (Figure 13b).
  • Step 5: during the transmission of the frame through the provider's network, the devices will take into account the priority value in the Ethernet header (Figure 13a).
    • Step 5a: similarly to Step 4a (Figure 13b).
    • Step 5b: if there is no agreement about the frame prioritization according to the priority value specified in the Ethernet header, a third-party service provider can apply its own QoS policy to the traffic and set a priority that may not satisfy the QoS policy of the DS domain (Figure 13c).
  • Step 6: the border device in the provider's network removes the priority field from the Ethernet header and forwards the frame to Node-2 (Figure 13a-c).
Center

Figure 13a - Example of the Ethernet frame priority changing during the transmission through two network segments (the priority setting is coordinated and the priority value matches for the 2 segments)

Figure 13b - Example of the Ethernet frame priority changing during the transmission through two network segments (the priority setting is coordinated, but the priority should be changed)

Figure 13c - Example of the Ethernet frame priority changing during the transmission through two network segments (the priority setting in the 2 segments is not coordinated)

Queue implementation in Infinet devices

For a device, analyzing the priority in the service headers and processing the data according to these priorities is not a simple task due to the following reasons:

  • The devices automatically recognize priorities according to different protocols. For example, the InfiLINK XG family of devices supports 802.1p prioritization, but does not recognize DSCP priority values.
  • The devices at the border of the DS domain allow to use different sets of criteria to classify the traffic. For example, the InfiMAN 2x2 devices allow to set priorities by selecting all the TCP traffic directed to port 23, while the Quanta 5 family does not support this type of prioritization.
  • The number of queues implemented inside the devices differs and depends on the manufacturer. A correspondence table is used to set a relation between the priority in the service header and the device's internal queue.

The tables below show the types of queues in the internal architecture, the priority handling possibilities and the relation between the standardized protocol priorities and the internal priority values used by the devices.

Please note the queuing architectural feature of the Infinet devices: all queues share a single memory buffer. If all the traffic falls into a single queue, the size of that queue will be equal to the size of the buffer, but if several queues are in use, the size of the memory buffer will be evenly divided between them.

Center
Internal packet queuing
Parameter | Description | InfiLINK 2x2 / InfiMAN 2x2 | InfiLINK Evolution / InfiMAN Evolution | InfiLINK XG / InfiLINK XG 1000 | Quanta 5 / Quanta 6 / Quanta 70
Marking criteria | A set of criteria that can be used to classify the incoming traffic.

PCAP expressions support

(PCAP expressions allow flexible filtering based on any service header field, see the PCAP filters article)

PCAP expressions support

(PCAP expressions allow flexible filtering based on any service header field, see the PCAP filters article)

vlan-id | vlan-id
Auto recognition | Protocols for which the family of devices automatically recognizes the priority set in the header and puts the data in the appropriate queue.

RTP

802.1p

IPIP/GRE tunnels

MPLS

DSCP

ToS

ICMP

TCP Ack

PPPoE

RTP

802.1p

IPIP/GRE tunnels

MPLS

DSCP

ToS

ICMP

TCP Ack

PPPoE

802.1p | 802.1p
Number of queues | The number of data queues used by the device. | 17 | 17 | 4 | 8
Queue management | Supported mechanisms for emptying the packets from the queues.

Weighted

Strict
Weighted

QoS configuration via Web | Documentation about the traffic prioritization configuration using the Web interface.

QoS options

Traffic Shaping

QoS options

Traffic Shaping

Configuring QoS

Switch

Configuring per-VLAN

Switch settings
QoS configuration via CLI | Documentation about the traffic prioritization configuration using the command line interface.
qm command | qm command | Commands for switch configuration | -
Correspondence table between the priorities of the standard protocols and the internal priorities used by the InfiLINK 2x2, InfiMAN 2x2, InfiLINK Evolution and InfiMAN Evolution families of devices
Traffic class (in accordance with MINT) | InfiLINK 2x2, InfiMAN 2x2, InfiLINK Evolution and InfiMAN Evolution | 802.1p | ToS (Precedence) | DSCP
Background1601

Regular best effort1500000
Business 614
018, 10
Business 513

12, 14
Business 412
0216, 18
Business 311

20, 22
Business 210
0324, 26
Business 1902
28, 30
QoS 48
0432
QoS 37

34
QoS 26

36
QoS 1503
38
Video 24040540, 42
Video 13

44, 46
Voice2050648, 50
Control106
52, 54
NetCrit0070756, 58, 60, 62
Correspondence table between the priorities of the standard protocols and the internal priorities used by the InfiLINK XG, InfiLINK XG 1000, Quanta 5, Quanta 6 and Quanta 70 families of devices
Traffic class (in accordance with 802.1p) | 802.1p | InfiLINK XG, InfiLINK XG 1000 | Quanta 5, Quanta 6, Quanta 70
Background (lowest priority)0010
Best Effort011
Excellent Effort0222
Critical Applications033
Video0434
Voice055
Internetwork Control0646
Network Control (higher priority)077


Queue management

Prioritization assumes the use of several packet queues, whose content must be transmitted to the outgoing interfaces through a common bus. Infinet devices support two mechanisms for packet transmission from the queues to the bus: strict and weighted scheduling.

Strict scheduling

The strict prioritization mechanism assumes a sequential emptying of the queues according to the priority values. Packets with priority 2 will only be sent after all the packets with priority 1 have been transferred to the bus (Figure 14). After the packets with priorities 1 and 2 are sent, the device will start sending packets with priority 3.

The disadvantage of this mechanism is that resources will not be allocated to low-priority traffic if there are packets in the higher priority queues, leading to the complete inaccessibility of some network services.
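A minimal sketch of strict scheduling, assuming three queues where a lower index means a higher priority: the scheduler always drains the highest-priority non-empty queue first. The queue contents are illustrative.

```python
from collections import deque

# Queues ordered by priority: index 0 is the highest priority (illustrative contents)
queues = [deque(["voice-1", "voice-2"]), deque(["web-1"]), deque(["mail-1", "mail-2"])]

def strict_dequeue():
    """Return the next packet from the highest-priority non-empty queue, or None if all are empty."""
    for queue in queues:
        if queue:
            return queue.popleft()
    return None

while (packet := strict_dequeue()) is not None:
    print(packet)   # voice-1, voice-2, web-1, mail-1, mail-2
```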

Center

Figure 14 - Strict packet scheduling

Anchor
weighted_prio
weighted_prio
Weighted scheduling

The weighted scheduling doesn't have the disadvantages of the strict scheduling. Weighted scheduling assumes the allocation of the resources between all the queues according to weighting factors that correspond to the priority values. If there are three queues (Figure 15), the weighting factors can be distributed in the following way:

  • packet queue 1: weight = 3;
  • packet queue 2: weight = 2;
  • packet queue 3: weight = 1.

When using the weighted scheduling, each queue will receive resources, i.e. a situation in which a network service becomes completely inaccessible will not occur.
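A minimal sketch of weighted scheduling with the 3/2/1 weights from Figure 15: in each round, a queue may transmit up to its weight in packets, so every non-empty queue receives a share of the bus. This is a simplified weighted round-robin model, not the exact algorithm used by the devices.

```python
from collections import deque

queues = {1: deque(range(6)), 2: deque(range(6)), 3: deque(range(6))}   # illustrative backlog
weights = {1: 3, 2: 2, 3: 1}

def weighted_round():
    """One scheduling round: each queue may transmit up to its weight in packets."""
    sent = []
    for queue_id, queue in queues.items():
        for _ in range(weights[queue_id]):
            if queue:
                sent.append((queue_id, queue.popleft()))
    return sent

print(weighted_round())   # [(1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (3, 0)]
```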

Center

Figure 15 - Weighted packet scheduling

Traffic prioritization recommendations

Universal recommendations for configuring the traffic prioritization mechanisms:

  • Pay special attention when developing the QoS policies. The policy should take into account the traffic of all the services used in the network and it should provide strict compliance between the service and the traffic class.
  • The QoS policy should take into account the technical capabilities of the devices for recognizing and handling the service field values which indicate the data priority.
  • The rules for traffic flow classification must be configured on the border devices of the DS domain.
  • The intermediate devices of the DS domain should automatically recognize the traffic priorities.

Throughput limitation mechanism 

The distribution of the network resources between the traffic flows can be performed not only by prioritization, but also using the throughput limitation mechanism. In this case, the bitrate of the stream cannot exceed the threshold level set by the network administrator.


The throughput limitation principle is to constantly measure the intensity of the data stream and to apply restrictions if this value exceeds the set threshold (Figure 16a,b). The throughput limitation in Infinet devices is performed according to the Token Bucket algorithm, where all data packets above the throughput threshold are discarded. As a result, losses, as described above, will appear.

Center

Figure 16a - Unlimited data flow rate

Figure 16b - Limited data flow rate

Token Bucket Algorithm

For each speed limit rule there is an associated logical buffer, which holds the amount of data allowed for transmission. Usually, the buffer size is larger than the limitation size. Each unit of time, the buffer is allocated an amount of data equal to the set bitrate threshold.

In the example below (video 1), the speed limit is 3 data units and the buffer size is 12 data units. The buffer is constantly filled according to the threshold, however, it cannot be filled over its own size.

Center

Multimedia
nameTokenBucket_1.mp4width100%

Video 1 - Resource allocation into a speed limit buffer

The data received by the device at the inbound interface will be processed only if the buffer contains enough resources for its processing (video 2). Thus, the passing data consumes the buffer's resources. If the buffer's resources are fully used up at the time a new data frame arrives, the data frame will be discarded.

Center

Multimedia
nameTokenBucket_2.mp4width100%

Video 2 - Usage of the dedicated resources for data processing

Keep in mind that the resource allocation and the data processing are performed simultaneously inside the buffer (video 3).

The intensity of the data flows in packet networks is inconsistent, which demonstrates the advantages of the Token Bucket algorithm. The time intervals in which data is not transmitted allow to accumulate resources in the buffer, and then to process an amount of data that exceeds the threshold. A wide band will be allocated to pulse data streams, such as web traffic, in order to ensure a quick loading of the web pages and to increase the comfort level of the end user.

Besides the described advantage of the Token Bucket algorithm, the average throughput will match the set threshold, as over a long period of time the amount of available resources will be determined not by the size of the buffer, but by the intensity of its filling, which is equal to the throughput threshold.
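A minimal sketch of the Token Bucket behaviour described above: tokens are replenished at the configured rate up to the bucket size, and a packet passes only if enough tokens are available, otherwise it is dropped. The rate and sizes are illustrative; the real implementation in the devices may differ in details.

```python
import time

class TokenBucket:
    """Simplified token bucket: `rate` data units per second, capped at `capacity` units."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size: float) -> bool:
        now = time.monotonic()
        # Replenish tokens for the elapsed time, never above the bucket size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size   # the passing data consumes the bucket's resources
            return True
        return False                     # not enough resources: the packet is discarded

bucket = TokenBucket(rate=3, capacity=12)    # 3 units per second, 12-unit buffer
print([bucket.allow(4) for _ in range(4)])   # a short burst passes, then drops begin
```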

Center

Multimedia
nameTokenBucket_3.mp4width100%

Video 3 - Data processing at the speed limit buffer

The Token Bucket algorithm can be applied to separate traffic flows. In this case, a speed limit buffer will be allocated for each flow (video 4).

In this example, two speed limit rules are implemented: for the traffic of vlan 161 - 3 data units per time block, for the traffic of vlan 162 - 2 data units. The buffer size for each traffic flow contains 4 time intervals, i.e. 12 data units for vlan's 161 traffic and 8 data units for vlan's 162 traffic. In total, 5 data units are allocated to the buffers in each time interval, then the allocated resources are distributed between the buffers. Since the size of the buffers is limited, the resources that exceed their size cannot be used.

Center

Multimedia
nameTokenBucket_4.mp4
width100%


Video 4 - Resource allocation for two speed limit buffers

Each buffer's resources can only be used for the traffic of the corresponding service (video 5). Thus, to handle vlan's 161 traffic, only the resources of the buffer for vlan's 161 traffic are used. Similarly, the other buffer's resources are used for vlan's 162 traffic.

Center

Multimedia
nameTokenBucket_5.mp4
width100%


Video 5 - Usage of the dedicated resources for data processing 

There are ways to combine the resource buffers. For example, on the Infinet devices, the allocated resource buffers can be combined using classes (see below). If one of the resource buffers is full (video 6), the resources allocated to it can be provided to another buffer.

In the example below, the buffer for vlan 162 is full of resources, allowing to fill in the vlan's 161 buffer with 5 data units of resources, instead of 3 (its own 3 data units plus the 2 data units of the other buffer). In this case, the vlan's 161 service throughput will increase. But when vlan's 162 traffic resource buffer will have free space, the resource allocation will return to the normal mode: for vlan's 161 buffer - 3 data units, for vlan's 162 buffer - 2 data units.

Center

Multimedia
nameTokenBucket_6.mp4
width100%

Video 6 - Redistribution of the allocated resources between various speed limit buffers

Anchor
shaping_infinet
shaping_infinet


Types of speed limits in Infinet devices

The throughput limitation principle described above is implemented in the Infinet devices in two ways:

  • Traffic shaping at the physical interface: the limitation will be applied to the total traffic of all data flows passing through the physical interface. This method is easy to configure - specify the interface and the threshold value - but it does not allow to apply the limitation to the traffic of a specific network service.
  • Traffic flow shaping: the limitation is applied to a logical data flow. The logical data flow is separated from the general traffic by checking it against specified criteria, which allows to apply throughput limitations to the traffic of network services selected based on the values of the service header fields. For example, the traffic tagged with vlan 42 can be separated into a logical channel and limited in throughput without influencing the other traffic flows.

The Infinet devices allow to configure hierarchical throughput allocation structures. Two object types are used to perform this: a logical channel and a class, which are connected by a child-parent relationship. The class has a throughput value assigned, which is distributed between the child logical channels, and the channel has a guaranteed and a maximum throughput value - CIR and MIR.

Let's look at the example of transmitting the traffic of two services associated with vlan id's 161 and 162, between Master and Slave (Figure 17a). The total traffic of the services should not exceed 9 Mbps.

The Master's device configuration can be performed in the following way (Figure 17b):

  • Class 16 has been configured with a 9 Mbps throughput.
  • Class 16 is the parent of the channels 161 and 162, i.e. the total traffic at these logical channels is limited to 9 Mbps.
  • The traffic with vlan ID 161 is associated with the logical channel 161; the traffic of vlan 162 is associated with the logical channel 162.
  • The CIR value for channel 161 is 4 Mbps and for channel 162 it is 5 Mbps. If both services actively exchange data, the threshold values for their traffic will be equal to the CIR set for each of the channels.
  • The MIR value for channel 161 is 9 Mbps and for channel 162 it is 7 Mbps. If there is no traffic in logical channel 162, then the threshold value for channel 161 will be equal to the MIR, i.e. 9 Mbps. In the other case, when there is no traffic in the logical channel 161, the threshold value for channel 162 will be equal to 7 Mbps.
Center


Figure 17a - Throughput limitation for 2 traffic flows tagged with vlan-ids 161 and 162


Figure 17b - Hierarchical channel structure of the throughput limits for the traffic of vlans 161 and 162
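The CIR/MIR arithmetic from this example can be sketched as a two-step allocation: each child channel first receives its CIR, then the remaining class capacity is shared up to each channel's MIR. This is a simplified model of the behaviour described above, not the scheduler actually implemented in the devices.

```python
def allocate(class_limit: float, channels: dict) -> dict:
    """channels: {name: {"cir": Mbps, "mir": Mbps, "demand": Mbps}} -> allocated Mbps per channel."""
    # Step 1: guarantee the CIR (bounded by the actual demand of the channel)
    alloc = {name: min(ch["cir"], ch["demand"]) for name, ch in channels.items()}
    spare = class_limit - sum(alloc.values())
    # Step 2: hand out the spare capacity, never exceeding each channel's MIR or demand
    for name, ch in channels.items():
        extra = min(spare, ch["mir"] - alloc[name], ch["demand"] - alloc[name])
        if extra > 0:
            alloc[name] += extra
            spare -= extra
    return alloc

# Both services are busy: each one is held to its CIR (4 + 5 = 9 Mbps, the class limit)
print(allocate(9, {"vlan 161": {"cir": 4, "mir": 9, "demand": 10},
                   "vlan 162": {"cir": 5, "mir": 7, "demand": 10}}))
# vlan 162 is idle: vlan 161 may grow up to its MIR of 9 Mbps
print(allocate(9, {"vlan 161": {"cir": 4, "mir": 9, "demand": 10},
                   "vlan 162": {"cir": 5, "mir": 7, "demand": 0}}))
```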

The throughput limitation capabilities of all Infinet families of devices are shown in the table below:

Center
Throughput limitation capabilities in Infinet devices
Parameter | Description | InfiLINK 2x2 / InfiMAN 2x2 | InfiLINK Evolution / InfiMAN Evolution | InfiLINK XG / InfiLINK XG 1000
Interface shaping | The throughput limitation capability of the device's physical interfaces. | - | - | GE0, GE1, SFP, mgmt
Logical stream shaping | The throughput limitation capability for a traffic stream, filtered according to one or more criteria. | up to 200 logical channels | up to 200 logical channels | -
Traffic directions | Ability to apply limitations to the incoming/outgoing traffic flows. | incoming and outgoing | incoming and outgoing | outgoing
Limitations hierarchy | The ability to create a system of hierarchical limitations. | up to 200 logical channels, which are the children of the logical classes | up to 200 logical channels, which are the children of the logical classes | -
Logical stream filtering | Criteria used to filter the data streams. | PCAP expressions support (PCAP expressions allow a flexible limitation based on any service header field, see the PCAP filters article) | PCAP expressions support | -
Traffic shaping in Web | Documentation about the throughput limitation settings in the Web interface. | Traffic shaping | Traffic Shaping | Switch
Traffic shaping in CLI | Documentation about the throughput limitation settings via CLI. | qm command | qm command | Commands for switch configuration

Recommendations for the throughput limitation configuration 

Use the following recommendations during the data throughput limitation configuration:

  • The traffic of all network services should be limited. It allows to take control over all traffic flows and separately allocate resources for these flows.
  • The throughput limitation should be performed on the devices closest to the data source. There is no need to duplicate throughput limiting rules for the data flows throughout the chain of intermediate devices.
  • Many network services are bidirectional, so they require restrictions on devices for both the incoming and the outgoing traffic.
  • To set the correct throughput threshold values, evaluate first the average and the maximum values of the service traffic. Pay special attention to the busy hours. Collecting data for analysis is possible via the InfiMONITOR monitoring system.
  • The sum of the CIR values of the logical channels associated with one class should not exceed the maximum class throughput.

Additional materials

White papers

  1. TDMA and Polling: Application features.
  2. Performance of the Infinet Wireless devices.

Webinars

  1. QoS policies configuration in Infinet Wireless devices.

Videos

  1. Quality of Service With Infinet Wireless Units.

Others

  1. RFC 4594.
  2. RFC 791.
  3. RFC 1349.
  4. RFC 2474.
  5. InfiMONITOR monitoring system.
  6. InfiLINK 2x2, InfiMAN 2x2 family devices web interface. QoS options.
  7. InfiLINK 2x2, InfiMAN 2x2 family devices web interface. Traffic shaping.
  8. InfiLINK Evolution, InfiMAN Evolution family devices web interface. QoS options.
  9. InfiLINK Evolution, InfiMAN Evolution family devices web interface. Traffic shaping.
  10. InfiLINK XG, InfiLINK XG 1000 family devices web interface. Configuring QoS.
  11. Quanta 5, Quanta 6 family devices web interface. Switch settings.
  12. Quanta 70 family devices web interface. Switch settings.
  13. QoS configuration in OS WANFleX.