Andy Furniss wrote:
Leslie Patrick Polzer wrote:
Hello,
I have a serious problem with HTB which I wasn't able to solve myself.
I run a masquerading router with ppp0 as the interface to the Internet. Three clients need to share a 1 Mbit/s downstream, which I want to divide with tc. When I see a packet being forwarded to one of these clients, I give it the appropriate unique mark:
iptables -t mangle -A FORWARD -d 192.168.34.141 -j MARK --set-mark 1
iptables -t mangle -A FORWARD -d 192.168.34.140 -j MARK --set-mark 2
iptables -t mangle -A FORWARD -d 192.168.1.2 -j MARK --set-mark 3
Because it might be of interest: 192.168.34.0/24 is on network A with 10 Mbit/s, 192.168.1.0/24 is on network B with 100 Mbit/s.
I then attach an IMQ device imq0 in the FORWARD chain of the mangle table:
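Roughly along these lines (the rates and class numbers below are illustrative placeholders, not my exact commands):

# send forwarded traffic through imq0 (IMQ target from the iptables IMQ patch)
iptables -t mangle -A FORWARD -j IMQ --todev 0
ip link set imq0 up

# HTB on imq0, with the parent rate kept below the 1 Mbit/s downstream
tc qdisc add dev imq0 root handle 1: htb default 40
tc class add dev imq0 parent 1: classid 1:1 htb rate 900kbit
tc class add dev imq0 parent 1:1 classid 1:10 htb rate 280kbit ceil 900kbit
tc class add dev imq0 parent 1:1 classid 1:20 htb rate 280kbit ceil 900kbit
tc class add dev imq0 parent 1:1 classid 1:30 htb rate 280kbit ceil 900kbit
tc class add dev imq0 parent 1:1 classid 1:40 htb rate 60kbit ceil 900kbit

# classify by the fwmarks set above
tc filter add dev imq0 parent 1: protocol ip handle 1 fw classid 1:10
tc filter add dev imq0 parent 1: protocol ip handle 2 fw classid 1:20
tc filter add dev imq0 parent 1: protocol ip handle 3 fw classid 1:30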
You can't use IMQ in forward AFAIK, see
http://www.docum.org/docum.org/kptd/
Hmmm, really? I mean, all intended packets go through it, with no errors whatsoever. They are marked correctly by iptables, and tc filter classifies them according to mark. The only problem seems to be the excess bandwidth distribution, which leads me to the question:
How could the hooks of IMQ and the excess bandwidth distribution of HTB relate in this setup?
I hope you understand that I'm not questioning your knowledge. I'm just not fully persuaded of this yet, so I'd like to discuss it a bit more.
You are right to question me :-) - I was thinking a bit too much about my own setup (at least I know that one works). I use IMQ on ppp so I can shape traffic headed for local processes as well as forwarded traffic. If you don't need to do that, then you don't need to hook it in prerouting anyway.
I am guessing that calling IMQ from FORWARD uses the postrouting hook, which is OK for your needs. I know from a test I did in prerouting that IMQ doesn't respect where in a chain it gets called from. You could test this by seeing whether you can shape locally generated traffic marked in OUTPUT, I suppose.
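A quick way to run that test, assuming the illustrative imq0/HTB setup sketched above: reuse one of the marks for the router's own traffic and watch the class counters.

# mark locally generated traffic in OUTPUT (illustrative: reuse mark 3)
iptables -t mangle -A OUTPUT -j MARK --set-mark 3

# if IMQ really picks this up at the postrouting hook, the 1:30 counters
# should climb while the router itself downloads something
tc -s class show dev imq0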
Wherever it hooks, you need to set a rate less than link speed, and if you use an old kernel, patch HTB. I said shaping from the wrong end of the bottleneck is a kludge because if I shape from the fat end then I control exactly what happens: I can arrange for my latency never to be increased by more than the time it takes to send one packet of my MTU at my bit rate. As long as I tweak for link overheads I can use nearly 100% of the bandwidth.
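To put a number on that worst case (illustrative figures, not my actual numbers: say a 1500-byte MTU on a 512 kbit/s uplink):

1500 bytes * 8 bits/byte / 512000 bit/s ≈ 23 ms

so one full-size packet already being transmitted ahead of an interactive packet adds at most about 23 ms, and shaping at the fat end keeps the delay bounded there.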
Incoming traffic from my ISP has already been through a 600 ms FIFO - it's never going to arrive at more than my link speed, so I need to set the ceil/rate totals to less than link speed; how much less will determine how fast the queue fills. The behavior of various types of queues is probably not the same as if they were at the other end of the bottleneck.
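How much less is a matter of experiment. With a 1 Mbit/s downstream like yours, you might start around 90% and walk it down (the numbers here are only an example):

# lower the parent class if the imq0 queue never seems to fill
tc class change dev imq0 parent 1: classid 1:1 htb rate 850kbit ceil 850kbit

keeping the child ceils at or below the new parent ceil.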
There are also factors out of my control: TCP can get bursty when ACKs get buffered elsewhere. There may be packets in long buffers (mainly P2P) headed for me which are unstoppable, and my queue may not have any packets from active connections at any given time. The queue also reacts too late when bandwidth demands change: a new connection will be in TCP slow start, which quite quickly increases its rate, causing a temporary filling of the ISP buffer, which hurts latency. It doesn't fill enough to cause drops, though, so as far as bandwidth allocation goes it's OK.
My queues also drop a bit too much when this happens, causing TCP to resync, which can be bursty.
Andy.
And thanks a lot for the additional information you gave me!
Kind regards,
Leslie