try this:

#make sure you've deleted anything old
#you might wanna try running /sbin/tc -s qdisc show dev eth1 to verify
#your current config.

#deletes all qdisc stuff just in case
/sbin/tc qdisc del dev eth1 root

#define root qdisc
/sbin/tc qdisc add dev eth1 root handle 1: htb default 2

#top-level class that the leaf classes below hang off of
/sbin/tc class add dev eth1 parent 1: classid 1:1 htb rate 1000kbit ceil 1000kbit burst 15k

#define the default rate class--where everything goes that doesn't match
#one of your filters
/sbin/tc class add dev eth1 parent 1:1 classid 1:2 htb prio 2 rate 1000kbit ceil 1000kbit burst 15k

#define the rate you wish to limit the vlan to
/sbin/tc class add dev eth1 parent 1:1 classid 1:20 htb prio 2 rate 220kbit burst 15k

#now create the filter that puts traffic from that vlan into class 20.
#1.2.3.4/24 is a range of IPs, but the filters are extraordinarily capable
#if you need to classify traffic some other way. Try replacing "802.1q"
#with "ip" if it doesn't work
/sbin/tc filter add dev eth1 protocol 802.1q prio 2 parent 1: u32 match ip dst 1.2.3.4/24 flowid 1:20

#now run the following command--very useful to confirm traffic is matching
#your filters since it will tell you how many packets match each filter
#rule you make:
tc -s filter show dev eth1
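
If you end up needing a separate cap for each vlan, the same pattern just
repeats: one class plus one filter per vlan subnet. A rough sketch--the
classid 1:30 and the 10.0.16.0/24 subnet are made-up placeholders here,
substitute your own:

#another vlan gets its own 220kbit class...
/sbin/tc class add dev eth1 parent 1:1 classid 1:30 htb prio 2 rate 220kbit burst 15k
#...and its own filter steering that subnet into the class
/sbin/tc filter add dev eth1 protocol ip prio 2 parent 1: u32 match ip dst 10.0.16.0/24 flowid 1:30

#per-class byte/packet counters are handy to confirm the split:
/sbin/tc -s class show dev eth1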

On Wed, 2007-08-22 at 14:55 -0400, sting wrote:
> So I did apply the tbf on the eth1 interface instead of the VLAN
> interface, and I saw the same results. Some rate limiting was definitely
> occurring, but not down to the rate (220kbit) I was expecting. It was
> still much higher (~1 Mbytes/s) with the unclamped rate being about 16
> Mbytes/s.
> 
> Has everyone else otherwise pretty much always obtained transfer rates to
> be clamped down to what they expected with the tbf?
> 
> thanks.
> 
> 
> >> My first guess would be vlans being a problem. I know at least for
> >> class based queuing disciplines on vlans, you have to take care to
> >> define filters that funnel traffic through a class by selecting
> >> 802.1q traffic on the real interface, not the vlan interface.
> >
> > Wow, why would that be though? If the VLAN is simply presented as an
> > interface, and the queuing disciplines work on an interface basis, what is
> > it that breaks it?
> >
> >> I know traffic shaping does work on vlans with the class based queues
> >> because I use it every day. But all my tc statements are applied on a
> >> real physical interface and not the vlan interface; I could never get
> >> tc to work on vlan interfaces directly.
> >
> > For what it's worth, I've been applying netem queuing disciplines to many
> > different VLAN interfaces and have been getting exactly the expected
> > results (the packet loss % is right on, etc). Could you think of anything
> > different with a tbf that fails?
> >
> >> Just a guess, but I bet you'd get the rate limiting you expect on
> >> your vlan by applying the tbf rate limit on interface eth1 instead of
> >> the vlan interface. If so, and if your goal is to rate limit by vlan,
> >> then you will likely need to go with a class based queueing
> >> discipline like htb and then define traffic filters to limit each
> >> vlan to the rate you wish.
> >
> > Yes the goal is to limit by VLAN. I will try what you suggested, i.e.
> > limit the traffic on the physical interface instead and I'll report back.
> > But I hope that won't be the solution! :)
> >
> >>
> >>
> >>> ----------------------------------------------------------------------
> >>>
> >>> Message: 1
> >>> Date: Tue, 21 Aug 2007 23:32:18 -0700
> >>> From: sting <sting@xxxxxxxxxxxxx>
> >>> Subject: simple tbf rate clamping issues
> >>> To: LARTC@xxxxxxxxxxxxxxx
> >>> Message-ID: <46CBD872.6060307@xxxxxxxxxxxxx>
> >>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> >>>
> >>> Hello,
> >>>
> >>> I was attempting to throttle egress traffic to a specific rate using a
> >>> tbf. As a starting point I used an example from the LARTC howto, which
> >>> goes:
> >>>
> >>> tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
> >>>
> >>> I then attempted a large fetch from another machine via wget (~40 megs)
> >>> and the rate was clamped down to about 12Kbytes/s. As this seemed too
> >>> much, I gradually increased the latency up to 200ms which then gave me
> >>> the expected results (~34Kbytes/s).
> >>>
> >>> I then applied this queuing discipline on a machine acting as a
> >>> gateway/router for a few VLANed subnets. The tbf was applied on
> >>> interface eth1.615. From another workstation I attempted a wget, and so
> >>> the traffic had to go through the gateway/router. The download rate
> >>> went from 16 Mbytes/s down to about 1.6 Mbytes/s, but was much much
> >>> higher than what I'm trying to clamp it down to.
> >>>
> >>> Two questions:
> >>> 1/ My main question. AFAIK, queuing disciplines affect egress traffic
> >>> whether that traffic originates from the host or is being forwarded.
> >>> Assuming that the fact that the tbf is mostly meant to be applied to
> >>> forwarded traffic is not an issue, *is there anything else that could
> >>> cause the transfer rate not to be correctly clamped down?* What
> >>> parameters should I be playing with?
> >>>
> >>> 2/ I'm assuming the first example I quoted must have worked as described
> >>> when the HOWTO was initially written a few years ago. In any case, I am
> >>> assuming with 50ms max latency outgoing packets could not be held long
> >>> enough in the tbf and had to be dropped, correct?
> >>>
> >>> Thank you,
> >>> sting
> >>>
> >>
> >
> > _______________________________________________
> > LARTC mailing list
> > LARTC@xxxxxxxxxxxxxxx
> > http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
> >

-- 
Bryan Schenker
Director
ResTech Services
www.restechservices.net
608-663-3868

_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
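
On question 2/ in the quoted message, the arithmetic backs up that guess: a
tbf's queue capacity is roughly limit = burst + rate * latency. A sketch of
the numbers only--the 200ms figure is the one reported above to give the
expected rate, everything else is just illustration:

# 220kbit is ~27500 bytes/s, so 50ms of latency allows only ~1375 bytes of
# queue on top of the 1540 byte burst--less than two full-size packets,
# so most of the flow gets dropped and TCP backs off well below the rate.
# 200ms gives ~5500 + 1540 bytes of queue, enough for TCP to keep the
# shaper busy at its configured rate.
# (replace swaps out whatever root qdisc is currently on the interface)
tc qdisc replace dev eth1 root tbf rate 220kbit latency 200ms burst 1540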