The evolution of data networks entails an increase in the volume of transmitted traffic, which requires a quality of service (QoS) policy. Implementing such a policy allows the network traffic to be classified and the network resources to be distributed between different traffic classes.
In packet networks, the traffic is transmitted from the sending node to the receiving node through communication channels and intermediate devices. Generally a data packet is processed by each intermediate device independently. Let's look at an example of data packet processing performed by an intermediate network device (Figure 1):
Note that in modern network devices, the network interfaces are usually combined and can operate both as incoming and outgoing.
Figure 1 - Traffic passing through an intermediate network device
A network device can be intermediate for several pairs of nodes, and each node can transmit the data of several services (Figure 2a). Let's look at a scheme where the "Network device" is an intermediate node for the traffic of the following node pairs: Node-1 - Node-4, Node-2 - Node-5 and Node-3 - Node-6. The first pair transmits data for three services, the second for two and the third for one. Without QoS settings, the data of all services pass through a common queue in the order they arrive at the "Network device", and in the same order they are transferred from the queue to the outgoing interfaces.
With QoS configured, each of the incoming traffic flows can be classified, for example by service type, and a separate queue can be mapped to each class (Figure 2b). Each packet queue can be assigned a priority, which will be taken into account while extracting the packets from the queues, and will guarantee specific quality indicators. The traffic flows can be classified not only by the services used, but by other criteria as well. For example, each pair of nodes can be assigned to a separate packet queue (Figure 2c).
Figure 2a - Queuing for various services without QoS
Figure 2b - Queuing for various services with QoS
Figure 2c - Queuing for various users with QoS
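The two classification schemes above can be sketched as follows. This is a minimal illustration, not device code; the packet tuples, queue names and classifier functions are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical packet records: (source, destination, service) tuples.
packets = [
    ("Node-1", "Node-4", "voice"),
    ("Node-2", "Node-5", "video"),
    ("Node-1", "Node-4", "web"),
    ("Node-2", "Node-5", "voice"),
]

def classify_by_service(packet):
    """Map a packet to a queue based on its service type (Figure 2b)."""
    return packet[2]

def classify_by_node_pair(packet):
    """Map a packet to a queue based on its node pair (Figure 2c)."""
    return f"{packet[0]}-{packet[1]}"

def enqueue(packets, classifier):
    """Distribute packets into per-class queues, preserving arrival order."""
    queues = defaultdict(list)
    for p in packets:
        queues[classifier(p)].append(p)
    return queues

per_service = enqueue(packets, classify_by_service)  # one queue per service
per_pair = enqueue(packets, classify_by_node_pair)   # one queue per node pair
```

Without QoS, all four packets would share a single queue; with either classifier, each class gets its own queue and can be assigned its own priority.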
Keep in mind that several intermediate network devices can be located on the data path between the source and the receiver, having independent packet queues, i.e. an effective QoS policy implementation will require the configuration of several network nodes.
The main conclusions from the previous section, which will be used to define the quality metrics, are the following:
There are three main quality metrics:
Let's look at each metric using an example: Node-2 transmits three data packets to Node-5; the data source and the recipient are connected to an intermediate Network device and the packets are part of the same service, i.e. their key service fields are the same.
During a data stream transmission, some packets may not be received, or may be received with errors. This process is called data loss and it is quantified as the ratio between the number of lost packets and the number of transmitted packets. In the example below (Figure 3), Node-2 transmits packets with the identifiers 1, 2 and 3, however, Node-5 receives only packets 1 and 3, i.e. the packet with the identifier 2 was lost. There are network mechanisms that allow the retransmission of lost data, such as TCP retransmissions and ARQ.
The causes of data loss can be divided into the following groups:
Figure 3 - Data packet loss example
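The loss metric from the Figure 3 example can be computed directly from the two sets of packet identifiers. A minimal sketch:

```python
def packet_loss_ratio(sent_ids, received_ids):
    """Packet loss: the share of transmitted packets that never arrived."""
    lost = set(sent_ids) - set(received_ids)
    return len(lost) / len(sent_ids)

# Figure 3 example: packets 1, 2 and 3 are sent, only 1 and 3 arrive.
loss = packet_loss_ratio([1, 2, 3], [1, 3])  # packet 2 was lost -> 1/3
```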
Losses are closely related to two indicators: throughput and packet performance.
One of the main indicators used in practice is the throughput, whose value depends on the losses. The throughput is determined by the capabilities of the physical channel and by the ability of the intermediate network devices to process the data stream. The link throughput is defined as the maximum amount of data that can be transmitted from the source to the receiver per unit of time.
A parameter that affects the throughput and the state of the queues is the packet performance of the device: the maximum number of data packets of a given length that a device is capable of processing per unit of time.
The real throughput depends both on the packet performance and on the interface's characteristics; therefore, at the network design stage, pay attention to the consistency of these parameters in order to avoid a situation where one of them becomes a bottleneck for a link or a network segment.
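The interplay between packet performance and link capacity can be sketched as follows. The numbers here are illustrative assumptions, not vendor figures: a hypothetical device forwarding 100,000 packets per second over a 1000 Mbps link.

```python
def achievable_throughput_mbps(pps, frame_size_bytes, link_capacity_mbps):
    """Throughput is capped by whichever is lower: the device's packet
    performance (pps x frame size) or the link capacity."""
    device_limit_mbps = pps * frame_size_bytes * 8 / 1_000_000
    return min(device_limit_mbps, link_capacity_mbps)

# Small frames: the device's packet performance is the bottleneck.
small = achievable_throughput_mbps(100_000, 64, 1000)    # 51.2 Mbps
# Large frames: the link capacity is the bottleneck.
large = achievable_throughput_mbps(100_000, 1518, 1000)  # 1000 Mbps
```

The same packet rate yields radically different throughput depending on the frame size, which is why the parameters must be checked for consistency together.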
The packet performance is defined by the hardware capabilities of the central processor and by the amount of internal memory. Network devices process multiple traffic streams with different L2 frame sizes, so the following Ethernet frame size values are used for a performance test:
Due to the limited amount of internal memory, the best packet performance is achieved for the minimum frame size. However, using minimum-sized frames entails a large amount of overhead, since each data frame has a service header whose size does not depend on the size of the frame itself.
For example, the service header length for 64-byte frames (Figure 4b) and 156-byte frames (Figure 4c) is the same, but the amount of user data differs. To transmit 138 bytes of user data, three 64-byte frames or one 156-byte frame are required: in the first case 192 bytes are sent, in the second only 156 bytes. For a link with a fixed throughput, large frames increase the efficiency by raising the useful throughput of the system, but the latency also increases. The performance of the Infinet devices in various conditions is shown in the "Performance of the Infinet Wireless devices" document.
Figure 4 - Frame structure for various Ethernet frame lengths
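The overhead arithmetic from the Figure 4 example can be reproduced with a short sketch. The 18-byte per-frame service header is an assumption consistent with the example's figures (64 - 18 = 46 bytes of payload per minimum frame).

```python
HEADER = 18  # per-frame service header assumed for this sketch

def frames_needed(payload_bytes, frame_size):
    """How many frames of the given size are needed for the payload."""
    per_frame = frame_size - HEADER
    return -(-payload_bytes // per_frame)  # ceiling division

def bytes_on_wire(payload_bytes, frame_size):
    """Total bytes transmitted, counting every frame at full size."""
    return frames_needed(payload_bytes, frame_size) * frame_size

# 138 bytes of user data, as in the example:
small = bytes_on_wire(138, 64)   # three 64-byte frames -> 192 bytes
large = bytes_on_wire(138, 156)  # one 156-byte frame  -> 156 bytes
```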
Delay is defined as the time it takes for a packet to travel from the source to the destination. The value of the delay depends on the following aspects:
The delay is often measured as a round-trip time (RTT), i.e. the time it takes for a data packet to travel from the source to the destination and back. For example, this value can be seen in the ping command's output. The time it takes for the intermediate network devices to process the data packets in the forward and reverse directions may differ; therefore, the round-trip time is usually not equal to double the one-way delay.
Figure 5 - Example of data transfer delay
The CPU load and the status of the packet queues at the intermediate network devices change frequently, so the delay of the data packet transmission will vary. In the example below (Figure 6), the transmission times for the packets with identifiers 1 and 2 differ. The difference between the maximum and the average delay values is called jitter.
Figure 6 - Example of varying delay in data transfer
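Using the definition above (maximum delay minus average delay), jitter can be computed from a series of delay measurements. The sample values are hypothetical:

```python
def jitter(delays_ms):
    """Jitter as defined above: the difference between the maximum
    and the average delay observed for a packet stream."""
    return max(delays_ms) - sum(delays_ms) / len(delays_ms)

samples = [12.0, 15.0, 11.0, 18.0]  # hypothetical one-way delays, ms
j = jitter(samples)                 # 18.0 - 14.0 = 4.0 ms
```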
When a redundant network infrastructure is used, the data between the source and the receiver can be transmitted through different paths, which causes jitter. Sometimes the difference between the delays of the paths becomes so large that the order of the transmitted data packets changes on the receiving side (Figure 7). In the example below, the packets were received in a different order than they were sent.
The effect depends on the characteristics of the service and on the ability of the higher-layer network protocols to restore the original sequence. Usually, if the traffic of different services is transmitted through different paths, this should not affect the ordering of the received data.
Figure 7 - Example of unordered data delivery
Each of the data transfer services has a set of requirements for the quality indicators. The RFC 4594 document includes the following service types:
The various services are transmitted over a single network infrastructure with limited resources; therefore, mechanisms must be provided to distribute the resources between the services.
Let's look at the example below (Figure 8). Node-2 generates the traffic of several services with a total rate of 1 Gbit/s. Medium-2 allows this data stream to be transferred to an intermediate network device; however, the maximum link throughput between the Network device and Node-5 is 500 Mbps. Obviously, the data stream cannot be processed completely and part of it must be dropped. The task of QoS is to make these drops manageable in order to provide the required metric values for the end services. Of course, it is impossible to provide the required performance for all the services, as the throughput does not match; therefore, the QoS policy implementation implies that the traffic of the critical services should be processed first.
Figure 8 - Example of inconsistency between the incoming traffic amount and the link throughput
Two main methods used during the QoS policy implementation can be highlighted:
Let's return to the example above and add a second intermediate device to the data distribution scheme (Figure 9a). The packet distribution proceeds as follows:
Each intermediate network device without traffic prioritization settings increases the data transmission delay, so the value of the delay is unpredictable. Thus, a large number of intermediate devices without QoS policies makes the operation of real-time services impossible because the quality indicators cannot be met, i.e. traffic prioritization must be performed along the entire traffic transmission path (Figure 9b).
Keep in mind that implementing QoS policies is not the only method of influencing the quality metrics. For an optimal effect, the QoS configuration should be coordinated with other settings. For example, using the TDMA technology instead of Polling on the InfiLINK 2x2 and InfiMAN 2x2 families of devices reduces jitter by stabilizing the value of the delay (see TDMA and Polling: Application features).
Figure 9a - Example of data distribution with partly implemented QoS policies
Figure 9b - Example of data distribution with implemented QoS policies
From the management point of view, the transmission path through the network can be described in two ways (Figure 10a, b):
Figure 10a - White-box structure example
Figure 10b - Black-box structure example
To solve the described problem of the black-box network structure, packet headers can be labeled: the priority required during packet processing is set in a header field and is kept along the whole path. In this case, every intermediate device can put the incoming data into a queue according to the value of the field in which the priority is indicated. This requires the development of standard protocols and their implementation by the equipment manufacturers.
Keep in mind that the equipment located in an external responsibility zone usually does not prioritize data in accordance with the priority values in the service headers. Traffic priority coordination should be performed at the border of the responsibility zones, at the administrative level, by implementing additional network configuration settings.
The processing priority of a packet can be set using the service fields of various network protocols. This article describes the use of the Ethernet and of the IPv4 protocol headers.
The Ethernet frame header includes the "User Priority" service field, which is used to prioritize data frames. The field is 3 bits long, which allows 8 traffic classes to be selected: 0 is the lowest priority class, 7 the highest. Keep in mind that the "User Priority" field is present only in 802.1q frames, i.e. frames using VLAN tagging.
Figure 11 - Frame prioritization service field in the Ethernet header
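The "User Priority" bits occupy the top 3 bits of the 16-bit Tag Control Information (TCI) field of the 802.1q tag, with the VLAN ID in the low 12 bits. A small bit-manipulation sketch:

```python
def vlan_priority(tci):
    """Extract the 3-bit User Priority from an 802.1q TCI field."""
    return (tci >> 13) & 0x7

def set_vlan_priority(tci, priority):
    """Return the TCI with its User Priority bits replaced."""
    if not 0 <= priority <= 7:
        raise ValueError("priority must be 0..7")
    return (tci & 0x1FFF) | (priority << 13)

tci = set_vlan_priority(0x0064, 5)  # VLAN ID 100, priority 5
```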
The IP protocol has three historical stages in the development of the service field responsible for packet prioritization:
Thus, ToS allows 8 traffic classes to be distinguished (0 - the lowest priority, 7 - the highest), and DSCP allows 64 classes (0 - the lowest priority, 63 - the highest).
Figure 12a - ToS service field in the IP packet header
Figure 12b - DSCP service field in the IP packet header
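Both prioritization schemes read the same byte of the IPv4 header: the historical ToS precedence uses its top 3 bits, DSCP its top 6 bits. A sketch of the two interpretations:

```python
def ip_precedence(tos_byte):
    """Historical ToS precedence: top 3 bits of the ToS byte (0..7)."""
    return (tos_byte >> 5) & 0x7

def dscp(tos_byte):
    """DSCP: top 6 bits of the same byte (0..63)."""
    return (tos_byte >> 2) & 0x3F

# The well-known DSCP value EF (Expedited Forwarding) is 46,
# encoded as the ToS byte 0xB8; its ToS precedence reads as 5.
ef_dscp = dscp(0xB8)
ef_prec = ip_precedence(0xB8)
```

Because both fields share the byte's high bits, a legacy device reading precedence and a modern device reading DSCP interpret the same marking consistently.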
Many end nodes in the network do not support handling of the service headers: they cannot set or remove the priority, so this functionality should be implemented on the corresponding intermediate network devices.
Let's look at the example of a data transmission from Node-1 to Node-2 through a DS-domain and through a third-party telecom operator's network (Figures 13a-c). The DS-domain includes three devices: two located at its border and one intermediate. Let's look at the steps taken for data processing in a network using an Ethernet frame transmission (the basic principles discussed in the example below are applicable to an IP packet or to any other protocol that supports data prioritization):
Figure 13a - Example of Ethernet frame priority changing during the transmission through two network segments (the priority setting is coordinated and the priority value matches for the 2 segments)
Figure 13b - Example of Ethernet frame priority changing during the transmission through two network segments (the priority setting is coordinated, but the priority should be changed)
Figure 13c - Example of Ethernet frame priority changing during the transmission through two network segments (the priority setting in the 2 segments is not coordinated)
For a device, the process of analyzing the priority in the service headers and the data processing according to these priorities is not a simple task due to the following reasons:
The tables below show the data types for the queues of the internal architecture, the priority handling possibilities and the relation between the standardized priorities and the internal priorities used by the device.
Please note an architectural queuing feature of the Infinet devices: all queues share a single memory buffer. If all the traffic falls into a single queue, the size of that queue will be equal to the size of the buffer, but if several queues are in use, the memory buffer will be evenly divided between them.
Internal packet queuing
Correspondence between the priorities of the standard protocols and the internal priorities used by the InfiLINK 2x2, InfiMAN 2x2, InfiLINK Evolution and InfiMAN Evolution families of devices
Correspondence table between the priorities of the standard protocols and the internal priorities used by the InfiLINK XG, InfiLINK XG 1000, Quanta 5, Quanta 6 and Quanta 70 families of devices
Prioritization assumes the use of several packet queues whose content must be transmitted to the outgoing interfaces through a common bus. Infinet devices support two mechanisms for transferring packets from the queues to the bus: strict and weighted scheduling.
The strict prioritization mechanism assumes a sequential emptying of the queues according to the priority values. Packets with priority 2 will only be sent after all the packets with priority 1 have been transferred to the bus (Figure 14). After the packets with priorities 1 and 2 are sent, the device will start sending packets with priority 3.
The disadvantage of this mechanism is that no resources are allocated to low-priority traffic while there are packets in the higher-priority queues, which can lead to the complete inaccessibility of some network services.
Figure 14 - Strict scheduling
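The strict scheduling behaviour above can be sketched as follows (lower priority number = served first, matching the Figure 14 example; the queue contents are hypothetical):

```python
from collections import deque

def strict_schedule(queues):
    """Drain queues in ascending priority-number order: queue 1 is
    emptied completely before queue 2 is served, and so on."""
    sent = []
    for prio in sorted(queues):
        q = queues[prio]
        while q:
            sent.append(q.popleft())
    return sent

queues = {1: deque(["a1", "a2"]), 2: deque(["b1"]), 3: deque(["c1", "c2"])}
order = strict_schedule(queues)
```

Note how every packet of priority 1 leaves before any packet of priority 2: if priority 1 traffic never stopped arriving, priorities 2 and 3 would starve, which is exactly the disadvantage described above.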
Weighted scheduling does not have the disadvantage of strict scheduling. It assumes the allocation of resources to all the queues according to weighting factors that correspond to the priority values. For three queues (Figure 15), the weighting factors can be distributed in the following way:
With weighted scheduling, each queue receives resources, i.e. no network service can become completely inaccessible.
Figure 15 - Weighted scheduling
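A weighted round-robin sketch of the idea above. The 2:1 weighting and the queue contents are illustrative assumptions, not the device's actual factors:

```python
from collections import deque

def weighted_round_robin(queues, weights, rounds):
    """Each round, take up to `weights[name]` packets from every queue,
    so low-priority queues still receive a share of the bus."""
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            for _ in range(weights[name]):
                if q:
                    sent.append(q.popleft())
    return sent

queues = {"high": deque(range(6)), "low": deque("abcdef")}
weights = {"high": 2, "low": 1}  # illustrative 2:1 weighting
order = weighted_round_robin(queues, weights, rounds=3)
```

Unlike strict scheduling, the low-priority queue transmits one packet per round even while high-priority packets are waiting.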
Universal recommendations for configuring traffic prioritization mechanisms:
The distribution of the network resources between the traffic flows can be performed not only by prioritization, but also by the throughput limitation mechanism. In this case, the bitrate of a stream cannot exceed the threshold set by the network administrator.
The throughput limitation principle is to constantly measure the throughput of the data stream and to apply restrictions if this value exceeds the set threshold (Figure 16a,b). The throughput limitation in Infinet devices is performed according to the Token Bucket algorithm: all data packets above the throughput threshold are discarded. As a result, losses will occur, as described above.
Figure 16a - Unlimited data flow rate
Figure 16b - Limited data flow rate
Each speed limit rule has an associated logical buffer that serves the allowed amount of transmitted data. Usually, the buffer size is larger than the limit threshold. In each unit of time, an amount of data equal to the bitrate limit threshold is allocated to the buffer.
In the example below (video 1), the speed limit is 3 data units and the buffer size is 12 data units. The buffer is constantly replenished according to the threshold; however, it cannot be filled beyond its own size.
Video 1 - Resource allocation into a speed limit buffer
The data received by the device at the inbound interface is processed only if the buffer has resources available for it (video 2). Thus, the forwarded data consumes the buffer's resources. If the buffer's resources are fully occupied at the time a new data frame arrives, the frame is discarded.
Video 2 - Dedicated resources usage for data processing
Keep in mind that the resource allocation and the data processing are performed simultaneously inside the buffer (video 3).
The rate of the data flows in packet networks is bursty, which is what makes the Token Bucket algorithm efficient. The time intervals in which data is not transmitted allow resources to accumulate in the buffer, so that an amount of data exceeding the threshold can then be processed. A wide momentary band is thus allocated to bursty data streams, such as web traffic, ensuring quick loading of web pages and increasing the comfort of the end user.
Despite this burst allowance, the average throughput will still match the set threshold: over a long period of time, the amount of available resources is determined not by the size of the buffer, but by the intensity of its replenishment, which is equal to the throughput threshold.
Video 3 - Data processing at the speed limit buffer
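The buffer mechanics described above can be sketched as a simplified Token Bucket, using the rate (3 units per step) and buffer size (12 units) from the video 1 example; the step-based model is a simplification of the continuous process:

```python
class TokenBucket:
    """Simplified Token Bucket: `rate` units are added per time step,
    up to `capacity`; data passes only if enough tokens remain."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # the bucket starts full

    def tick(self):
        """One time step: replenish tokens, never exceeding capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def offer(self, size):
        """Try to pass `size` units of data; drop them if tokens run out."""
        if size <= self.tokens:
            self.tokens -= size
            return True   # data forwarded
        return False      # data dropped

tb = TokenBucket(rate=3, capacity=12)
burst_ok = tb.offer(10)  # a burst above the rate passes: the bucket was full
drop = tb.offer(5)       # only 2 tokens remain, so this data is dropped
tb.tick()                # 3 more tokens accumulate for the next step
```

The full bucket lets a burst of 10 units through even though the rate is only 3 per step, while the long-run average stays bounded by the replenishment rate.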
The Token Bucket algorithm can be applied to separate traffic flows. In this case, a speed limit buffer will be allocated for each flow (video 4).
In this example, two speed limit rules are implemented: 3 data units per unit of time for the traffic of vlan 161, and 2 data units for the traffic of vlan 162. The buffer for each traffic flow holds 4 time intervals' worth of data, i.e. 12 data units for vlan 161 traffic and 8 data units for vlan 162 traffic. In total, 5 data units are allocated to the buffers in each time interval, and the allocated resources are then distributed between the buffers. Since the size of the buffers is limited, resources that exceed their size cannot be used.
Video 4 - Resource allocation for two speed limit buffers
Each buffer's resources can only be used for the traffic of the corresponding service (video 5). Thus, only the resources of the vlan 161 buffer are used to handle vlan 161 traffic; similarly, the other buffer's resources are used for vlan 162 traffic.
Video 5 - Usage of the dedicated resources for data processing
There are ways to combine the resource buffers. For example, on the Infinet devices the allocated resource buffers can be combined using classes (see below). If one resource buffer is full (video 6), the resources arriving for it can be provided to another buffer.
In the example below, the buffer for vlan 162 is full, which allows the vlan 161 buffer to be filled with 5 data units per interval instead of 3 (its own 3 data units plus the 2 data units of the other buffer). In this case, the throughput of the vlan 161 service will increase. But as soon as the vlan 162 resource buffer has free space again, the resource allocation returns to the normal mode: 3 data units for the vlan 161 buffer, 2 data units for the vlan 162 buffer.
Video 6 - Redistribution of the allocated resources between various speed limit buffers
The throughput limitation principle described above is implemented in the Infinet devices in two ways:
The Infinet devices allow hierarchical throughput allocation structures to be configured. Two object types are used for this: a logical channel and a class, connected by a child-parent relationship. The class has a throughput value assigned, which is distributed between the child logical channels, and each channel has a guaranteed and a maximum throughput value - CIR and MIR.
Let's look at the example of transmitting the traffic of two services associated with vlan id's 161 and 162, between Master and Slave (Figure 17a). The total traffic of the services should not exceed 9 Mbps.
The Master's device configuration can be performed in the following way (Figure 17b):
Figure 17a - Throughput limitation for 2 traffic flows tagged with vlan-ids 161 and 162
Figure 17b - Hierarchical channel structure of the throughput limits for the traffic of vlans 161 and 162
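The hierarchical allocation can be sketched as a toy CIR/MIR distribution for one class. The 9 Mbps class total comes from the example above; the per-channel CIR, MIR and demand figures are assumptions made for illustration, and the CIR-first allocation order is a common convention, not necessarily the device's exact algorithm:

```python
def allocate(class_rate, channels):
    """Toy allocation for one class: first satisfy every channel's CIR,
    then share the remainder up to each channel's MIR.
    `channels` maps name -> (cir, mir, demand), all in Mbps."""
    alloc = {n: min(cir, demand) for n, (cir, mir, demand) in channels.items()}
    remaining = class_rate - sum(alloc.values())
    for n, (cir, mir, demand) in channels.items():
        extra = min(mir, demand) - alloc[n]
        grant = min(extra, remaining)
        if grant > 0:
            alloc[n] += grant
            remaining -= grant
    return alloc

# Hypothetical figures for a 9 Mbps class shared by two channels:
# vlan 161 (CIR 4, MIR 9) demands 8 Mbps; vlan 162 (CIR 3, MIR 6) demands 3.
channels = {"vlan161": (4, 9, 8), "vlan162": (3, 6, 3)}
shares = allocate(9, channels)
```

Both channels receive their guaranteed CIR, and the spare class capacity goes to the channel with unmet demand, without the class total ever exceeding 9 Mbps.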
The throughput limitation capabilities of all Infinet families of devices are shown in the table below:
Throughput limitation capabilities in Infinet devices
Use the following recommendations when configuring the throughput limitation: