Quick and Dirty QoS Setup

6/16/05
Eric Low

The Advanced Routing & Traffic Control HOWTO, while possibly the definitive guide for this, is way too confusing and has way too much information. If you just want to set up something simple, to rate limit certain types of traffic while providing fairness among users/protocols, you really don't need much.

Everything depends on a program called tc, so make sure you have it. Then, in the kernel config, under Device Drivers -> Networking support -> Networking options -> QoS and/or fair queueing, ensure that you have the following options selected:

QoS and/or fair queueing
CBQ packet scheduler
HTB packet scheduler
SFQ queue
Packet classifier API
TC index classifier
Firewall based classifier
U32 classifier
Use nfmark as a key in U32 classifier

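Quick way to sanity-check all of that - this assumes your kernel was built with /proc/config.gz support (the "Kernel .config support" option), which is an assumption on my part:

tc -V
zcat /proc/config.gz | grep -E 'NET_SCH_(CBQ|HTB|SFQ)|NET_CLS_(U32|FW)'
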
Here's the explanation of SFQ and HTB from the LARTC HOWTO:

Stochastic Fairness Queueing (SFQ) is a simple implementation of the fair queueing algorithms family. It's less accurate than others, but it also requires fewer calculations while being almost perfectly fair.

The key word in SFQ is conversation (or flow), which mostly corresponds to a TCP session or a UDP stream. Traffic is divided into a pretty large number of FIFO queues, one for each conversation. Traffic is then sent in a round robin fashion, giving each session the chance to send data in turn.

This leads to very fair behaviour and disallows any single conversation from drowning out the rest. SFQ is called 'Stochastic' because it doesn't really allocate a queue for each session; instead, it has an algorithm which divides traffic over a limited number of queues using a hashing algorithm.

Because of the hash, multiple sessions might end up in the same bucket, which would halve each session's chance of sending a packet, thus halving the effective speed available. To prevent this situation from becoming noticeable, SFQ changes its hashing algorithm quite often so that any two colliding sessions will only do so for a small number of seconds.

It is important to note that SFQ is only useful in case your actual outgoing interface is really full! If it isn't then there will be no queue on your linux machine and hence no effect. Later on we will describe how to combine SFQ with other qdiscs to get a best-of-both worlds situation.

Specifically, setting SFQ on the ethernet interface heading to your cable modem or DSL router is pointless without further shaping!

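For reference, attaching SFQ by itself looks like this (just a sketch - per the warning above, doing only this on a fast ethernet interface in front of a slow link accomplishes nothing, since the queue never builds up on the Linux box):

# perturb 10 = re-pick the hash every 10 seconds, limiting collisions
tc qdisc add dev eth0 root sfq perturb 10
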
HTB is meant as a more understandable, intuitive and faster replacement for the CBQ qdisc in Linux. Both CBQ and HTB help you to control the use of the outbound bandwidth on a given link. Both allow you to use one physical link to simulate several slower links and to send different kinds of traffic on different simulated links. In both cases, you have to specify how to divide the physical link into simulated links and how to decide which simulated link to use for a given packet to be sent.

HTB ensures that the amount of service provided to each class is at least the minimum of the amount it requests and the amount assigned to it. When a class requests less than the amount assigned, the remaining (excess) bandwidth is distributed to other classes which request service.

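A quick worked example of that rule: say a 300kbit link carries class A with rate 200kbit and class B with rate 100kbit, both with ceilings high enough not to matter. If A asks for 150kbit and B asks for everything it can get, A receives its full 150kbit (it asked for less than its guarantee), and B receives the remaining 150kbit - its own 100kbit guarantee plus the 50kbit A left unused.
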
Now, for a given interface, you're going to start off with a root qdisc, and everything's going to branch out from there into a tree. You add classes under the root, then more classes, more qdiscs, or filters under those. The root is the parent of the class/qdisc below it, that's the parent of whatever sits under it, and so on. Stick to the classful qdiscs for the inner nodes, as those are the ones that can contain more classes.

Let's look at this one interface at a time, or it's gonna get confusing.

Every object has a handle consisting of a major:minor number. Classes have the same major number as their parent qdisc, while a child qdisc gets a fresh major number of its own - by convention, the minor number of the class it hangs off of. No two objects will have the same major:minor number. Your root qdisc will be 1:0. From the Linux Advanced Routing and Traffic Control HOWTO, chapter 9.5.2.1, a typical hierarchy might look like this:

          1:        root qdisc
           |
          1:1       child class
         / | \
        /  |  \
       /   |   \
      /    |    \
   1:10  1:11  1:12   child classes
     |     |     |
     |    11:    |    qdisc
     |           |
    10:         12:   qdiscs
    / \         / \
  10:1 10:2   12:1 12:2   leaf classes

When you number classes, keep in mind that all classes at the same level in the hierarchy are checked in order of their minor number: :1 is tried first, then, say, :5 if needed, then :10, and so on. The numbers don't have to be consecutive.

Now, HTB (Hierarchical Token Bucket) is going to do your rate limiting. Start by making your root qdisc an HTB discipline. Stick one class under it, also using the HTB discipline, with a maximum rate set to just under the real maximum rate of your modem/router/connection/whatever. DO NOT set this equal to your connection's real maximum rate - if your connection becomes the bottleneck, packets queue up in the modem or router instead of in Linux, and your queueing will get all fucked up. The bottleneck must be here, in the class at the top of the tree. Otherwise we are not the one controlling the traffic flow - our connection is. No good.

The top of my tree looks like this:

tc qdisc add dev eth0 root handle 1:0 htb default 1
tc class add dev eth0 parent 1:0 classid 1:1 htb rate 1400kbit
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 256kbit ceil 1080kbit prio 10
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 256kbit ceil 1080kbit prio 5
tc class add dev eth0 parent 1:1 classid 1:13 htb rate 64kbit ceil 888kbit prio 10
tc class add dev eth0 parent 1:1 classid 1:14 htb rate 64kbit ceil 888kbit prio 10

We have a T1, which is good for a bit over 1.5 megabits (1544kbit on the wire). So, I bottlenecked it right there at the top with a rate of 1400kbit, safely under what the line can actually deliver. I then subdivided that class with more classes, each having a guaranteed minimum rate (256kbit, 256kbit, 64kbit, and 64kbit - note they sum to 640kbit, which must not exceed the parent's 1400kbit) and a maximum ceiling it can borrow up to (1080kbit, 1080kbit, 888kbit, and 888kbit). As for prio: lower numbers mean higher priority, so excess bandwidth is offered to 1:12 (prio 5) before the prio 10 classes; classes with identical priorities split the excess in proportion to their rates.

*** Notice the default 1 option on the root qdisc. This means that unclassified traffic - anything no filter claims - is routed to minor number 1 under that major, i.e. class 1:1. If I had specified default 7, unclassified traffic would be routed to 1:7.

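At any point you can check your work with tc's show commands (the -s flag adds byte and packet counters, which is handy for seeing which classes traffic is actually landing in):

tc qdisc show dev eth0
tc class show dev eth0
tc -s class show dev eth0
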
Now, I subdivide three of those classes with SFQ so that traffic within each class is shared fairly among connections (perturb 10 re-picks the hash every 10 seconds, per the collision discussion in the quote above):

tc qdisc add dev eth0 parent 1:11 handle 11:0 sfq perturb 10
tc qdisc add dev eth0 parent 1:13 handle 13:0 sfq perturb 10
tc qdisc add dev eth0 parent 1:14 handle 14:0 sfq perturb 10

The remaining class, 1:12, I branched out further, using filters to sort its traffic by destination port:

tc class add dev eth0 parent 1:12 classid 1:131 htb rate 128kbit ceil 1080kbit prio 2
tc class add dev eth0 parent 1:12 classid 1:139 htb rate 32kbit ceil 128kbit prio 10
tc filter add dev eth0 protocol ip parent 1:12 prio 2 u32 match ip dport 443 0xffff flowid 1:131
tc filter add dev eth0 protocol ip parent 1:12 prio 3 u32 match ip dport 80 0xffff flowid 1:131
tc filter add dev eth0 protocol ip parent 1:12 prio 10 u32 match ip dport 25 0xffff flowid 1:139

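Those u32 filters match on the IP destination port - the 0xffff is a mask covering the whole 16-bit port field - so HTTPS (443) and HTTP (80) land in the fast 1:131 class, while SMTP (25) gets squeezed into 1:139. Since we compiled in the firewall-based classifier, you could instead mark packets with iptables and match on the mark, which is easier for anything u32 can't express simply. A sketch only - the mark value 6 and the choice of ssh traffic are mine, just for illustration:

# Mark outbound ssh in the mangle table...
iptables -t mangle -A POSTROUTING -o eth0 -p tcp --dport 22 -j MARK --set-mark 6
# ...then send marked packets to class 1:14 with the fw classifier:
tc filter add dev eth0 protocol ip parent 1:0 prio 1 handle 6 fw flowid 1:14
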
Check out the tcng project for an easier way to configure QoS (takes out some of the confusion, ultimately uses the same tools).

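If you make a mess while experimenting, the quickest recovery is to delete the root qdisc, which tears down the whole tree so you can start over:

tc qdisc del dev eth0 root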

*** For a really good practical example, check out chapter 15.10 in the LARTC HOWTO.