sting wrote:
Hello,
I was attempting to throttle egress traffic to a specific rate using a
tbf. As a starting point I used an example from the LARTC HOWTO, which
goes:
tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
It's not the best example, as latency is just a way of setting the buffer
length (limit), and 50ms @ 220kbit works out to less than 1500 bytes. If
you set the limit explicitly to less than 1514/1518 you would not pass
full-size bulk packets at all. I guess it gets rounded up a bit if you use
latency.
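Roughly (I'm assuming here that tbf derives the limit from rate x latency,
possibly plus the burst):

  220 kbit/s = 27500 bytes/s
  27500 bytes/s * 0.050 s = 1375 bytes

which is less than a single 1514-byte frame, so an explicit limit that
small would never queue a full-size packet.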
I then attempted a large fetch (~40 megs) from another machine via wget,
and the rate was clamped down to about 12 Kbytes/s. As this seemed to be
clamping too hard, I gradually increased the latency up to 200ms, which
then gave me the expected result (~34 Kbytes/s).
I would expect that; TCP doesn't like one-packet/short buffers, and it's
even worse on a LAN than on a WAN, as (Linux?) TCP behaves differently
when it detects low latency.
I then applied this queuing discipline on a machine acting as a
gateway/router for a few VLANed subnets. The tbf was applied on interface
eth1.615. From another workstation I attempted a wget, so the traffic had
to go through the gateway/router. The download rate went from 16 Mbytes/s
down to about 1.6 Mbytes/s, but that is still much, much higher than what
I'm trying to clamp it down to.
I just tested a tbf on a vlan and it seems OK - if you see 1.6 Mbytes/s
and the tbf is 220kbit, maybe you are shaping in the wrong direction and
just seeing the ACKs? (OK, I am just guessing here.)
What does tc -s qdisc ls dev eth1.615 say?
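If the byte counter there barely moves while the download is running, the
bulk data isn't going through that qdisc and you're probably only shaping
the ACKs. A rough sketch of what I'd try then - the interface name facing
the downloading workstation is just a guess on my part, I'm calling it
eth1.616 here:

  tc qdisc del dev eth1.615 root
  tc qdisc add dev eth1.616 root tbf rate 220kbit burst 10kb latency 200ms

i.e. put the tbf on whichever egress interface the bulk data actually
leaves on.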
Two questions:
1/ My main question. AFAIK, queuing disciplines affect egress traffic
whether that traffic originates from the host or is being forwarded.
Assuming that the fact that this tbf is mostly applied to forwarded
traffic is not an issue, *is there anything else that could cause the
transfer rate not to be correctly clamped down?* What parameters should I
be playing with?
One possible difference, though it's probably not your problem.
If you have a NIC that does TCP segmentation offload, then locally
generated traffic may go through as supersize "packets", which makes htb
go over rate. I am not sure what tbf would do - maybe just drop them if
the buffer is not long enough.
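If you want to rule that out, ethtool can show and toggle the offload
(assuming your driver supports it; eth1 here stands in for whichever NIC
carries the traffic):

  ethtool -k eth1          # check whether tcp-segmentation-offload is on
  ethtool -K eth1 tso off  # turn it off while testing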
2/ I'm assuming the first example I quoted must have worked as described
when the HOWTO was initially written a few years ago. In any case, I am
assuming that with a 50ms max latency outgoing packets could not be held
long enough in the tbf and had to be dropped, correct?
Yep, and also that example was on a PPP WAN, IIRC.
If you put anything on the root of an eth/vlan interface you need to
remember that you are going to be catching ARP as well as IP traffic.
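If that turns out to matter, one way around it is to go classful and only
steer IP into the rate-limited class. Just a sketch of the idea (htb in
place of the root tbf, untested on your setup):

  # unmatched traffic (e.g. ARP) falls into the unshaped default class 1:20
  tc qdisc add dev eth1.615 root handle 1: htb default 20
  tc class add dev eth1.615 parent 1: classid 1:10 htb rate 220kbit ceil 220kbit
  tc class add dev eth1.615 parent 1: classid 1:20 htb rate 100mbit
  # catch-all u32 match on protocol ip, so only IP lands in the 220kbit class
  tc filter add dev eth1.615 parent 1: protocol ip u32 match u32 0 0 flowid 1:10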
Andy.
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc