Re: collectively rate limiting multiple classes

Linux Advanced Routing and Traffic Control

Thank you, Martin and Adam.  I appreciate the responses.


On Thu, Sep 6, 2018 at 2:14 AM, Adam Nieścierowicz
<adam.niescierowicz@xxxxxxxxxx> wrote:
> You can use a classful qdisc like:
>
> level1: qdisc 1:0 eth0
>
> level1: class 1:1 1Gbps
>
> level1: class 1:2 parent 1:1 speed 100Mbps
>
> level2: qdisc 2:0 on class 1:2
>
> level2: class 2:1 speed 100Mbps
>
> level2: class 2:2 parent 2:1 speed 70Mbps fwmark 0x7
>
> level2: class 2:3 parent 2:1 speed 80Mbps fwmark 0x8
>
> filter level1: attach to qdisc 1:0, redirect traffic to class 1:2 based
> on ip src/dst address
> filter level2: attach to qdisc 2:0, redirect traffic to class 2:x based
> on fwmark; don't use an ip src/dst filter here
>

I like this approach.  Since I don't need to match on src/dst, I
simplified like so:

tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
tc qdisc add dev eth0 parent 1:1 handle 2: htb default 1
tc class add dev eth0 parent 2: classid 2:1 htb rate 100mbit
tc class add dev eth0 parent 2:1 classid 2:7 htb rate 70mbit
tc class add dev eth0 parent 2:1 classid 2:8 htb rate 80mbit
tc filter add dev eth0 parent 2: protocol ip pref 1 handle 0x7 fw classid 2:7
tc filter add dev eth0 parent 2: protocol ip pref 1 handle 0x8 fw classid 2:8

Please let me know if you see a problem with this.
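
For anyone who wants to follow along, the resulting hierarchy and its
per-class counters (tokens/ctokens, borrowed/lended, drops) can be
inspected with:

tc -s -d class show dev eth0
tc -s qdisc show dev eth0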

>
>
> On 06.09.2018 at 05:35, Martin A. Brown wrote:
>> Hello there,
>>
>>>  I'm having a hard time determining the best approach for
>>> collectively rate limiting multiple classes of traffic.
>>>
>>>  I think the best way to describe what I'm trying to accomplish is by
>>> example.  Let's say that you do not want to allow more than 100Mb/s
>>> out of an interface.  But you also want to impose additional limits
>>> on certain types of traffic.  So for example, you want traffic with
>>> fwmark 0x7 limited to at most 70Mb/s and traffic with fwmark 0x8
>>> limited to at most 80Mb/s.  What kind of setup would be recommended
>>> for this type of scenario?
>> Others on this list who have more current experience with the
>> traffic control tooling may have a better answer for a recommended
>> setup; however, I will offer my answer below to your question about
>> HTB and setting up nested classes.
>>
>>>  One idea was having a root HTB class for the interface with a rate
>>> and ceil of 100Mb/s.  Then have subclasses with rate 0 and ceils of
>>> 80Mb/s and 70Mb/s.  However, I'm not allowed to set a zero rate.
>>>  I don't want to guarantee any amount of bandwidth for a particular
>>> class; I only want to impose limits.  I get the feeling that I should
>>> use an entirely different approach.  Does anyone have a suggestion?
>> Why not use a 1Mb/s rate for each of the leaf classes?  Or,
>> actually, anything non-zero.
>>
>> I think there's a conceptual piece you are missing with HTB.  When
>> you set the rate, you are not actually reserving anything at all.
>> You are simply setting the rate at which the borrowing / sharing
>> mechanisms kick into play.
>>
>> What you want to avoid is a case where the sum of the rates of the
>> leaf(most) classes exceeds the ceiling of any of the parent classes.
>> Last I knew, HTB had no detection of this, so you could essentially
>> write a configuration that would allow you to send more in the
>> leaf(most) classes than a ceil in a parent class.

This is exactly what I am worried about.  That is why, in my example,
the sum of the leaf rates is greater than the parent's ceil.

>>
>> So, probably pick some very low bitrate for your fwmark 0x8 and
>> fwmark 0x7 classes and then set the ceil to be the maximum for each
>> class.

My current implementation actually does this.  I use a rate of 1bps.
It just felt wrong because I had to assign some value for rate.  Also,
I had negative token statistics that never increased.  I suppose if I
increased the rate to something more reasonable, like the 1Mb/s rate
that you suggested, I would avoid the negative token "problem".  But
with larger rates, it's more likely that the sum of the leaf rates
will exceed the parent's ceil.  I do not have a single specific use
case in mind; I am creating something that needs to work with
arbitrary configurations.
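
For concreteness, leaf classes set up the way Martin suggests would
look something like this in my simplified eth0 hierarchy above (the
1mbit rate is just a nominal non-zero floor for the borrowing
mechanism; the ceil is the real cap):

tc class add dev eth0 parent 2:1 classid 2:7 htb rate 1mbit ceil 70mbit
tc class add dev eth0 parent 2:1 classid 2:8 htb rate 1mbit ceil 80mbit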

>>
>> Good luck,
>>
>> -Martin
>>
>
> --
> ---
> Regards,
> Adam Nieścierowicz
>
>

There is another aspect that I regrettably failed to mention
initially: I am also assigning different prios to classes.  My
intention is that certain fwmarks should always get to "cut in line".

Here is the configuration that I tested with:

tc qdisc add dev br-vlan10 root handle 1: htb default 1
tc class add dev br-vlan10 parent 1: classid 1:1 htb rate 131072bps
tc qdisc add dev br-vlan10 parent 1:1 handle 2: htb default 11
tc class add dev br-vlan10 parent 2: classid 2:1 htb rate 131072bps
tc class add dev br-vlan10 parent 2:1 classid 2:11 htb rate 131072bps prio 1
tc class add dev br-vlan10 parent 2:1 classid 2:10 htb rate 131072bps prio 0
tc class add dev br-vlan10 parent 2:1 classid 2:12 htb rate 131072bps prio 2
tc class add dev br-vlan10 parent 2:1 classid 2:3 htb rate 1310720bps prio 1
tc filter add dev br-vlan10 parent 2: protocol ip pref 1 handle 0x3 fw classid 2:3
tc filter add dev br-vlan10 parent 2: protocol ip pref 1 handle 0x5 fw classid 2:10

The larger rate for fwmark 0x3 is there to confirm that the level 1
qdisc will provide the overall throttle -- which it does.

When I run two concurrent 20Mb/s udp iperfs, one with fwmark 0x3 and
one with fwmark 0x5, occasionally a clump of packets from the 0x3 flow
makes it through!
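
For anyone reproducing this: one way to generate two marked flows is
netfilter MARK rules plus iperf, along these lines (the ports and
address here are illustrative, not my exact setup):

iptables -t mangle -A POSTROUTING -o br-vlan10 -p udp --dport 5001 -j MARK --set-mark 0x3
iptables -t mangle -A POSTROUTING -o br-vlan10 -p udp --dport 5002 -j MARK --set-mark 0x5
iperf -c 192.0.2.10 -u -b 20M -p 5001 &
iperf -c 192.0.2.10 -u -b 20M -p 5002 &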

I think the 0x3 packets have been sitting in the buffer for a long
time and accumulating.  Once the buffer holds enough 0x3 packets,
there are moments when no 0x5 packets are queued to transmit (due to
tail drop), and therefore a clump of 0x3 packets gets out.  Does this
sound right?
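
One way to check this is to watch the per-class and per-qdisc counters
during the test; a persistent backlog on 2:3 together with moments
where 2:10 drains empty would fit that picture:

tc -s qdisc show dev br-vlan10
tc -s class show dev br-vlan10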

Is there a way to head-drop the 0x3 packets from the buffer to make
more room for the 0x5 packets?  Ideally, during the iperf test I just
described, there would be only short periods of time where 0x3 packets
sit in the buffer, and they would never make it onto the wire after
the 0x5 packets start coming.
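
One untested idea on my end: since each HTB leaf class takes its own
qdisc, the 2:3 queue could be kept short (bounding how long 0x3
packets can sit), or replaced with codel, which drops from the head of
the queue.  Something like (the byte limit is a guess):

tc qdisc add dev br-vlan10 parent 2:3 handle 30: bfifo limit 20000

or

tc qdisc add dev br-vlan10 parent 2:3 handle 30: codel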

Thanks,

Andy

-- 
http://www.uplevelsystems.com
