Re: simple tbf rate clamping issues

Linux Advanced Routing and Traffic Control

So I applied the tbf on the eth1 interface instead of the VLAN
interface, and saw the same results.  Some rate limiting was definitely
occurring, but not down to the rate (220kbit) I was expecting.  It was
still much higher (~1 Mbytes/s), with the unclamped rate being about 16
Mbytes/s.
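For concreteness, here is a sketch of the kind of invocation under discussion, with the burst sized up (the LARTC guidance is that the bucket should hold at least rate/HZ bytes, or the achieved rate under-shoots the configured one). Every value is a placeholder rather than the exact command from my setup, and DRY_RUN=1 (the default here) only prints the tc commands, so the sketch can be inspected without root:

```shell
# Sketch only: all values are illustrative placeholders.
# DRY_RUN=1 (default) prints the tc commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

DEV=eth1        # physical interface, not the vlan device
RATE=220kbit    # target rate from the howto example
BURST=10k       # too small a burst can under-shoot the configured rate
LIMIT=20k       # bytes queued before tbf starts dropping

run tc qdisc add dev "$DEV" root tbf rate "$RATE" burst "$BURST" limit "$LIMIT"
```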

Has everyone else generally seen transfer rates clamped down to what
they expected with tbf?
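For reference, the per-vlan htb approach suggested below might look roughly like the following. This is a hedged sketch, not a tested configuration: the flower classifier used here to match vlan ids needs a far more recent kernel than the tc of this era, and every id and rate is a placeholder (1:615 is only chosen to echo the eth1.615 device):

```shell
# Hedged sketch of "htb on the physical NIC, one class per vlan".
# All ids/rates are placeholders; DRY_RUN=1 (default) just prints commands.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

DEV=eth1

run tc qdisc add dev "$DEV" root handle 1: htb default 30
run tc class add dev "$DEV" parent 1: classid 1:1 htb rate 16mbit
run tc class add dev "$DEV" parent 1:1 classid 1:30 htb rate 16mbit

# one class + one filter per vlan; flower matches the 802.1q vlan id
run tc class add dev "$DEV" parent 1:1 classid 1:615 htb rate 220kbit ceil 220kbit
run tc filter add dev "$DEV" parent 1: protocol 802.1q flower vlan_id 615 flowid 1:615
```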

Thanks.

>
>> My first guess would be vlans being a problem. I know at least for
>> class based queuing disciplines on vlans, you have to take care to
>> define filters that funnel traffic through a class by selecting
>> 802.1q traffic on the real interface, not the vlan interface.
>
> Wow, why would that be though?  If the VLAN is simply presented as an
> interface, and the queuing disciplines work on an interface basis, what is
> it that breaks it?
>
>> I know traffic shaping does work on vlans with the class based queues
>> because I use it every day. But all my tc statements are applied on a
>> real physical interface and not the vlan interface; I could never get
>> tc to work on vlan interfaces directly.
>
> For what it's worth, I've been applying netem queuing disciplines to many
> different VLAN interfaces and have been getting exactly the expected
> results (the packet loss % is right on, etc).  Could you think of anything
> different with a tbf that fails?
>
>> Just a guess, but I bet you'd get the rate limiting you expect on
>> your vlan by applying the tbf rate limit on interface eth1 instead of
>> the vlan interface. If so, and if your goal is to rate limit by vlan,
>> then you will likely need to go with a class based queueing
>> discipline like htb and then define traffic filters to limit each
>> vlan to the rate you wish.
>
> Yes the goal is to limit by VLAN.  I will try what you suggested, i.e.
> limit the traffic on the physical interface instead and I'll report back.
> But I hope that won't be the solution! :)
>
>
>>> ----------------------------------------------------------------------
>>>
>>> Message: 1
>>> Date: Tue, 21 Aug 2007 23:32:18 -0700
>>> From: sting <sting@xxxxxxxxxxxxx>
>>> Subject:  simple tbf rate clamping issues
>>> To: LARTC@xxxxxxxxxxxxxxx
>>> Message-ID: <46CBD872.6060307@xxxxxxxxxxxxx>
>>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>>
>>> Hello,
>>>
>>> I was attempting to throttle egress traffic to a specific rate using a
>>> tbf.  As a starting point I used an example from the LARTC howto, which
>>> goes:
>>>
>>> tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
>>>
>>> I then attempted a large fetch from another machine via wget (~40 megs),
>>> and the rate was clamped down to about 12 Kbytes/s.  As this seemed too
>>> low, I gradually increased the latency up to 200ms, which then gave me
>>> the expected results (~34 Kbytes/s).
>>>
>>> I then applied this queuing discipline on a machine acting as a
>>> gateway/router for a few VLANed subnets.  The tbf was applied on
>>> interface eth1.615.  From another workstation I attempted a wget, so
>>> the traffic had to go through the gateway/router.  The download rate
>>> went from 16 Mbytes/s down to about 1.6 Mbytes/s, which is still much
>>> higher than what I'm trying to clamp it down to.
>>>
>>> Two questions:
>>> 1/ My main question.  AFAIK, queuing disciplines affect egress traffic
>>> whether that traffic originates from the host or is being forwarded.
>>> Assuming that the tbf being applied mostly to forwarded traffic is not
>>> an issue, *is there anything else that could cause the transfer rate
>>> not to be correctly clamped down?*  What parameters should I be
>>> playing with?
>>>
>>> 2/ I'm assuming the first example I quoted must have worked as
>>> described when the HOWTO was initially written a few years ago.  In
>>> any case, I am assuming that with a 50ms max latency, outgoing packets
>>> could not be held long enough in the tbf and had to be dropped,
>>> correct?
>>>
>>> Thank you,
>>> sting
>>>
>
> _______________________________________________
> LARTC mailing list
> LARTC@xxxxxxxxxxxxxxx
> http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
>
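On the 50ms-latency question in the quoted message: tbf derives its queue limit from the latency parameter, so at 220kbit (~27.5 Kbytes/s of drain) 50ms allows only about 1375 bytes of queue on top of the 1540-byte burst. Under a sustained TCP transfer most arriving packets overflow that tiny queue and get dropped early, which would explain throughput landing well under the token rate. A throwaway check of the arithmetic, using only figures from the quoted post:

```shell
# All numbers come from the quoted post; this just does the arithmetic.
RATE_KBIT=220
LATENCY_MS=50
BURST=1540

BYTES_PER_SEC=$((RATE_KBIT * 1000 / 8))             # drain rate in bytes/s
QUEUE_BYTES=$((BYTES_PER_SEC * LATENCY_MS / 1000))  # queue bought by 50ms
echo "drain: ${BYTES_PER_SEC} B/s, queue: ${QUEUE_BYTES} B, burst: ${BURST} B"
# prints: drain: 27500 B/s, queue: 1375 B, burst: 1540 B
```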

