Hello,
I was attempting to throttle egress traffic to a specific rate using a
tbf. As a starting point I used an example from the LARTC howto, which
goes:
tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540
I then attempted a large fetch (~40 MB) from another machine via wget,
and the rate was clamped down to about 12 Kbytes/s. As this seemed too
low, I gradually increased the latency up to 200ms, which then gave me
the expected result (~34 Kbytes/s).
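For reference, here is roughly what I ran (the 50ms command is the HOWTO example verbatim; the delete-then-re-add sequence is just how I replaced the qdisc between tests):

```shell
# HOWTO example as quoted -- gave me ~12 Kbytes/s:
tc qdisc add dev eth1 root tbf rate 220kbit latency 50ms burst 1540

# Remove it and retry with a larger latency -- gave me ~34 Kbytes/s:
tc qdisc del dev eth1 root
tc qdisc add dev eth1 root tbf rate 220kbit latency 200ms burst 1540
```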
I then applied this queuing discipline on a machine acting as a
gateway/router for a few VLANed subnets. The tbf was applied on
interface eth1.615. From another workstation I attempted a wget, so
the traffic had to go through the gateway/router. The download rate
went from 16 Mbytes/s down to about 1.6 Mbytes/s, which is still much,
much higher than what I'm trying to clamp it down to.
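Concretely, what I ran on the gateway was along these lines (same parameters as my earlier test, just on the VLAN interface):

```shell
# Same tbf as before, applied on the VLAN subinterface of the router:
tc qdisc add dev eth1.615 root tbf rate 220kbit latency 200ms burst 1540
```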
Two questions:
1/ My main question. AFAIK, queuing disciplines affect egress traffic
whether that traffic originates from the host or is being forwarded.
Assuming it is not an issue that the tbf is here applied mostly to
forwarded traffic, *is there anything else that could cause the
transfer rate not to be correctly clamped down?* What parameters
should I be playing with?
2/ I'm assuming the first example I quoted must have worked as
described when the HOWTO was initially written a few years ago. In any
case, am I right in assuming that with a 50ms maximum latency, outgoing
packets could not be held long enough in the tbf and had to be dropped?
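Here is the back-of-the-envelope arithmetic behind that assumption, treating the tbf queue as holding roughly latency * rate + burst bytes (which is how I read tc-tbf(8), at least when no peakrate is set):

```shell
# rate 220kbit = 220000 bits/s = 27500 bytes/s (taking kbit = 1000 bits)
rate_bytes=$(( 220 * 1000 / 8 ))

# Bytes the queue can hold at 50ms latency, plus the 1540-byte bucket:
queue_50=$(( rate_bytes * 50 / 1000 + 1540 ))
echo "queue at 50ms:  $queue_50 bytes"    # 2915 -- barely two 1500-byte packets

# Same calculation at 200ms:
queue_200=$(( rate_bytes * 200 / 1000 + 1540 ))
echo "queue at 200ms: $queue_200 bytes"   # 7040 -- a few packets of headroom
```

So at 50ms the queue can hold not quite two full-size packets, which would explain constant drops and TCP backing off well below the configured rate.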
Thank you,
sting
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc