Re: simple tbf rate clamping issues

Linux Advanced Routing and Traffic Control

On Wed, 2007-08-22 at 14:01 -0400, sting wrote:
> > My first guess would be vlans being a problem. I know at least for
> > class based queuing disciplines on vlans, you have to take care to
> > define filters that funnel traffic through a class by selecting
> > 802.1q traffic on the real interface, not the vlan interface.
> 
> Wow, why would that be though?  If the VLAN is simply presented as an
> interface, and the queuing disciplines work on an interface basis, what is
> it that breaks it?
> 

It can depend on where tc hooks into the network stack and where the vlan
headers get added and stripped. I'm no kernel hacker, but I suspect it
comes down to whether your network card is handling some of the vlan
tagging work or whether the OS is doing it in software. I have noticed
different behavior with different network cards.
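If you want to check which side is doing the tagging, ethtool can usually
tell you. A quick sketch, assuming ethtool is installed and a reasonably
recent driver/kernel (the exact feature names it lists vary):

# show the NIC's offload settings and pick out the vlan-related ones
ethtool -k eth1 | grep -i vlan
# a card that offloads tagging typically reports something like:
#   rx-vlan-offload: on
#   tx-vlan-offload: on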


# on one server I use...
/sbin/tc filter add dev eth1 protocol ip prio 2 parent 1: \
    [insert appropriate filter statement here] flowid 1:123

# on another server I use (same kernel, just a different NIC)...
/sbin/tc filter add dev eth1 protocol 802.1q prio 2 parent 1: \
    [insert appropriate filter statement here] flowid 1:123


Adding vlan information changes where some data sits in the packet: the
802.1q tag adds four bytes to the Ethernet header, so everything after it
(like the IP header) shifts. I can't explain in exact detail why I ran into
problems, just what I've discovered.
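One way to see what's actually on the wire, and whether the tags survive to
the sniffer, is to watch the physical interface with link-level headers
printed. A rough sketch, using vlan 615 from the original post; if the
driver strips tags on receive, you may not see them here at all:

# print link-level headers on the physical interface, only for vlan 615
tcpdump -e -n -i eth1 vlan 615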


> > I know traffic shaping does work on vlans with the class based queues
> > because I use it every day. But all my tc statements are applied on a
> > real physical interface and not the vlan interface; I could never get
> > tc to work on vlan interfaces directly.
> 
> For what it's worth, I've been applying netem queuing disciplines to many
> different VLAN interfaces and have been getting exactly the expected
> results (the packet loss % is right on, etc).  Can you think of anything
> about tbf that would make it fail where netem works?
> 

Not sure on that one. tbf does have a lot of "knobs" to turn in its
configuration, though, and I've not used netem.
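For reference, the main knobs on a plain tbf are rate, burst (the bucket
size in bytes) and latency (or limit). A minimal sketch based on the line
from the original post, with the latency raised from 50ms to the 200ms you
found necessary; the burst may also need to grow if the rate goes up:

tc qdisc add dev eth1 root tbf rate 220kbit burst 1540 latency 200ms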


> > Just a guess, but I bet you'd get the rate limiting you expect on
> > your vlan by applying the tbf rate limit on interface eth1 instead of
> > the vlan interface. If so, and if your goal is to rate limit by vlan,
> > then you will likely need to go with a class based queueing
> > discipline like htb and then define traffic filters to limit each
> > vlan to the rate you wish.
> 
> Yes the goal is to limit by VLAN.  I will try what you suggested, i.e.
> limit the traffic on the physical interface instead and I'll report back. 
> But I hope that won't be the solution! :)
> 
Limiting on the physical interface will allow you to group vlans under a
common rate limit, which can be useful.
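
To make that concrete, here's a rough sketch of the htb-plus-filters
approach on the physical interface. The subnets (10.6.15.0/24 and
10.6.16.0/24), the second vlan, and the rates are made up for illustration,
and as noted above you may need protocol 802.1q instead of ip in the
filters depending on the NIC:

# one htb root on the physical interface, with a class per vlan
tc qdisc add dev eth1 root handle 1: htb default 99
tc class add dev eth1 parent 1:  classid 1:1   htb rate 100mbit
tc class add dev eth1 parent 1:1 classid 1:615 htb rate 220kbit ceil 220kbit
tc class add dev eth1 parent 1:1 classid 1:616 htb rate 512kbit ceil 512kbit
tc class add dev eth1 parent 1:1 classid 1:99  htb rate 100mbit   # catch-all

# steer each vlan's traffic into its class; subnets here are hypothetical
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.6.15.0/24 flowid 1:615
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
    match ip dst 10.6.16.0/24 flowid 1:616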






> >> ----------------------------------------------------------------------
> >>
> >> Message: 1
> >> Date: Tue, 21 Aug 2007 23:32:18 -0700
> >> From: sting <sting@xxxxxxxxxxxxx>
> >> Subject:  simple tbf rate clamping issues
> >> To: LARTC@xxxxxxxxxxxxxxx
> >> Message-ID: <46CBD872.6060307@xxxxxxxxxxxxx>
> >> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> >>
> >> Hello,
> >>
> >> I was attempting to throttle egress traffic to a specific rate using a
> >> tbf.  As a starting point I used an example from the LARTC HOWTO, which
> >> goes:
> >>
> >> tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
> >>
> >> I then attempted a large fetch from another machine via wget (~40 megs),
> >> and the rate was clamped down to about 12 Kbytes/s.  As this seemed to be
> >> clamping too hard, I gradually increased the latency up to 200ms, which
> >> then gave me the expected results (~34 Kbytes/s).
> >>
> >> I then applied this queuing discipline on a machine acting as a
> >> gateway/router for a few VLANed subnets.  The tbf was applied on
> >> interface eth1.615.  From another workstation I attempted a wget, so
> >> the traffic had to go through the gateway/router.  The download rate
> >> went from 16 Mbytes/s down to about 1.6 Mbytes/s, but that is still
> >> much, much higher than what I'm trying to clamp it down to.
> >>
> >> Two questions:
> >> 1/ My main question. AFAIK, queuing disciplines affect egress traffic
> >> whether that traffic originates from the host or is being forwarded.
> >> Assuming that the fact that the tbf is now mostly being applied to
> >> forwarded traffic is not an issue, *is there anything else that could
> >> cause the transfer rate not to be correctly clamped down?*  What
> >> parameters should I be playing with?
> >>
> >> 2/ I'm assuming the first example I quoted must have worked as described
> >> when the HOWTO was initially written a few years ago.  In any case, I am
> >> assuming that with a 50ms max latency, outgoing packets could not be held
> >> long enough in the tbf and had to be dropped, correct?
> >>
> >> Thank you,
> >> sting
> >>
> >
> >
> 
> 
-- 
Bryan Schenker
Director
ResTech Services
www.restechservices.net
608-663-3868

_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
