Quick question I've been trying to figure out myself without success:
can I attach a qdisc to a qdisc instead of a qdisc to a class? Be
nice to chain a few qdiscs together...
Anyway, in order to divide up traffic like that, you'll need to limit
bandwidth too, because splitting traffic up by priority alone is not
enough to reduce latency. You may be sending your most
latency-sensitive traffic first, but if it's stuck in a queue in your
cable modem behind everything else that was already there, it won't
do you a damn bit of good. The following setup took my pings (and ssh
packets) from about 2.5 _seconds_ of latency under full upload
conditions to the best times I could get under no load (~40ms from
here to google). Note that br0 is my bridge where eth0 is the part
touching the internet. I both NAT and bridge hosts...
#Masquerade ball!
iptables -t nat -F
iptables -t mangle -F
iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE
#Setup general policing goodness
tc qdisc del dev eth0 root
tc qdisc add dev eth0 root handle 1: htb default 10
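#---- (default 10: anything that doesn't match a filter below lands in class 1:10)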
#---- My upload is 384 kilobit from my testing. 380000 under-runs it "just enough"
tc class add dev eth0 parent 1: classid 1:1 htb rate 380kbit
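#---- (the child rates below -- 120 + 120 + 140 -- add up to the full 380kbit;
#---- the ceil on 1:10 and 1:12 lets them borrow whatever the others aren't using)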
#General traffic
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 120kbit ceil 380kbit prio 2
#Limit general traffic backlog
#---- I might want to increase this... it's a buffer that limits the
#---- backlog on my default class because it tends to hold too much.
tc qdisc add dev eth0 parent 1:10 handle 100: bfifo limit 12000b
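#---- (rough sizing note, my arithmetic: 12000 bytes = 96000 bits, which is ~0.8s
#---- of backlog at the guaranteed 120kbit and ~0.25s when borrowing up to the
#---- 380kbit ceil. For a ~250ms cap even at the guaranteed rate, replacing the
#---- line above with something like this should work -- 3750 is just
#---- 120000/8 * 0.25, untested:)
#tc qdisc add dev eth0 parent 1:10 handle 100: bfifo limit 3750b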
#Priority (small) traffic -- UDP, small SSH, ICMP, small ACK, SYNs
#---- Capped at about 1/3 since it should remain small traffic only.
#---- Note that no large TCP packets are entered into this class. ICMP,
#---- DNS, and small packets should never max it out, and therefore it
#---- should never need more than about 1/3 of the link's bandwidth.
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 120kbit prio 0
#Common bulk interactives
tc class add dev eth0 parent 1:1 classid 1:12 htb rate 140kbit ceil 380kbit prio 2
tc qdisc add dev eth0 parent 1:12 handle 120: sfq perturb 10
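#---- (not part of my original setup: class 1:11 above keeps the default pfifo
#---- leaf, so flows within the priority class are first-come-first-served. If
#---- you want per-flow fairness there too -- see the SFQ discussion quoted
#---- below -- attaching it the same way should work; handle 110: is just an
#---- arbitrary pick:)
#tc qdisc add dev eth0 parent 1:11 handle 110: sfq perturb 10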
#Let iptables tag things
#Priority (small) queue
tc filter add dev eth0 protocol ip parent 1:0 prio 1 handle 1 fw flowid 1:11
#HTTP Queue
tc filter add dev eth0 protocol ip parent 1:0 prio 2 handle 2 fw flowid 1:12
#Small packets are fast packets
iptables -t mangle -A POSTROUTING -m length --length 0:128 -j MARK --set-mark 0x1
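#---- (the RETURN below stops mangle/POSTROUTING traversal for the small packets
#---- just marked, so the port rules further down can't overwrite the 0x1 mark)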
iptables -t mangle -A POSTROUTING -m length --length 0:128 -j RETURN
iptables -t mangle -A POSTROUTING -p icmp -j MARK --set-mark 0x1
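#---- (not one of my original rules, and the length match above already catches
#---- them, but if you wanted to tag TCP SYNs explicitly regardless of size,
#---- something like this should also work:)
#iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST,ACK SYN -j MARK --set-mark 0x1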
#certain ports go certain places
iptables -t mangle -A POSTROUTING -p tcp --dport 80 -j MARK --set-mark 0x2
iptables -t mangle -A POSTROUTING -p tcp --dport 443 -j MARK --set-mark 0x2
iptables -t mangle -A POSTROUTING -p tcp --dport 5190 -j MARK --set-mark 0x2
iptables -t mangle -A POSTROUTING -p tcp --sport 22 -j MARK --set-mark 0x2
iptables -t mangle -A POSTROUTING -p tcp --dport 22 -j MARK --set-mark 0x2
#DNS gets the faster lane
iptables -t mangle -A POSTROUTING -p udp --dport 53 -j MARK --set-mark 0x1
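If you want to sanity-check that packets are landing in the right classes,
watching the counters while the uplink is loaded and comparing ping times
against the unloaded baseline should be enough -- something along these
lines (standard tc stats commands; the scp is only an example load):
tc -s qdisc show dev eth0
tc -s class show dev eth0
# in one terminal: scp a big file somewhere to saturate the upload
# in another: ping a well-connected host and compare against the ~40ms baseline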
On Dec 3, 2005, at 1:04 AM, Andreas Klauer wrote:
On Friday 02 December 2005 23:24, Brian J. Murrell wrote:
Yeah, that is what I want, but why do I need HTB?
You need it only if you also want to limit bandwidth somehow.
I guess I am missing the reasoning for partitioning up the bandwidth
with HTB rather than just letting everyone/everything have an
opportunity to use the full bandwidth as long as something/somebody
more important is not using it.
Imagine a network where every machine tries to send data at much
higher rates than your total bandwidth allows. This may cause packet
queues building up at your router, or worse, at your modem or
provider. These queues have to empty themselves first before a new
packet can be sent, which can cause a lot of additional delay
depending on queue size.
In that scenario, it's important to take control over this building-up
queue, which you can do by limiting bandwidth using HTB or similar (so
the queue will be in your router, not somewhere else), by making your
router the bottleneck.
Surely it will be connection-based fairness within the priority class.
I haven't looked at the code, but I think it's just a plain fifo
queue, unless you attach SFQ or similar to replace it.
Oh? So one ssh could starve another? Why? Are the outbound SSH
packets not just put to the front of the queue in FIFO order?
That's what I thought.
HTH
Andreas
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc