Re: how to get the latency down on maxed out classes? + extra question

Linux Advanced Routing and Traffic Control

I've been using HTB for a while now and have been tweaking and experimenting
with it a lot. I haven't got it perfect yet, but it's a lot better than
nothing (both in terms of bandwidth efficiency and latency under load).

Basically what I do is create a few classes with different priorities: 0, 1,
2 and 3 (lower value means higher priority). Then I add a filter rule to each
class that sends all packets with a specific iptables MARK to that class.
Then I create some iptables rules that put the traffic in the correct class,
which means that all game traffic (and, for testing, all ICMP packets) goes
into priority 0. SSH, telnet, ACK packets, etc. go into 1. HTTP and FTP go
into 2, and all other packets (Kazaa/Direct Connect and other unknown stuff)
are left with priority 3. I limit the total upstream traffic to 100kbit (of
the 128kbit my ISP provides me); I'm pretty sure I could go a little higher
once I get the details tweaked right. I set rates and ceilings for each
class, although (in theory) the priorities should take care of it and it
should be possible to let each class use what it wants.

My question here is: why does HTB not permit me to use more than 4
priorities? Some documentation I've seen says I should be able to go as high
as priority 7. Maybe my HTB version is too old? Anyway, my Linux box is a
mess; I'll reinstall it sometime soon and start using 2.4.20.

I use iptables rules like these for the marking:
iptables -I PREROUTING -t mangle -i eth2 --jump MARK --set-mark 1 -p ICMP
iptables -I PREROUTING -t mangle -i eth0 --jump MARK --set-mark 1 -p ICMP
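
For the other classes the rules look much the same. A rough sketch (the port
numbers here are just examples, pick whatever belongs in each class, repeat
for eth2, and note that anything left unmarked ends up in the htb default
class of the script below anyway):
iptables -A PREROUTING -t mangle -i eth0 -p TCP --dport 22 --jump MARK --set-mark 2
iptables -A PREROUTING -t mangle -i eth0 -p TCP --dport 80 --jump MARK --set-mark 3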


Another trick I use is reducing the maximum TCP packet size, which is good
for latency. I've seen other scripts that do the same thing using different
MTU settings, and even custom routing tables with a maximum MTU per route. I
haven't really thought about the differences much, but this trick effectively
gives the same result:
iptables -I PREROUTING -t mangle -i eth2 --jump TCPMSS --set-mss 700 -p TCP --tcp-flags SYN,RST SYN
iptables -I PREROUTING -t mangle -i eth0 --jump TCPMSS --set-mss 700 -p TCP --tcp-flags SYN,RST SYN
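
A variant you often see in other people's scripts (not what I use above, and
it usually sits in the FORWARD chain) clamps the MSS to the path MTU instead
of a fixed value:
iptables -A FORWARD -p TCP --tcp-flags SYN,RST SYN --jump TCPMSS --clamp-mss-to-pmtu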

Another important thing to do is change a simple kernel setting, which
improves HTB accuracy enormously and is nice when you're tweaking the
settings: in /usr/src/linux/include/net/pkt_sched.h, change
PSCHED_CLOCK_SOURCE to PSCHED_CPU. If you have a CPU with a timestamp counter
(TSC), that will give you MHz timer granularity.
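
Roughly, the change looks like this (assuming your tree still has the stock
default of PSCHED_JIFFIES; some kernels already pick PSCHED_CPU on
TSC-capable x86, so check your own header first, and rebuild afterwards):
grep PSCHED_CLOCK_SOURCE /usr/src/linux/include/net/pkt_sched.h
# then change PSCHED_JIFFIES on that #define line to PSCHED_CPU and recompile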

Don't forget to change the interface names in the iptables lines above (and
in the script below).

After having used a pretty complicated script for a while, I recently started
rewriting it. So far I made this (no SFQ or other stuff, just bare classes).

#!/bin/bash

INET_INTERFACE="eth0"

PRIOS='0 1 2 3'

TC_UPLINK_RATE="100"
TC_RATE[0]=50
TC_CEIL[0]=110
TC_RATE[1]=15
TC_CEIL[1]=60
TC_RATE[2]=10
TC_CEIL[2]=60
TC_RATE[3]=5
TC_CEIL[3]=30

# Packet marks (which priority is linked to which MARK value)
MARK[0]="1"
MARK[1]="2"
MARK[2]="3"
MARK[3]="4"


# Executables and stuff
IPTABLES="iptables"
TC="tc"
IP="ip"
LOG="/dev/null"

# Comment out the next two lines to really run the tc commands.
TC="echo tc"
LOG="/dev/stdout"

# Find last prio, which will be the default
for PRIO in $PRIOS
do
 LAST_PRIO=$PRIO
done

# The TC part
$TC qdisc del dev $INET_INTERFACE root

$TC qdisc add dev $INET_INTERFACE root handle 1:0 \
    htb default $[$LAST_PRIO+10]
$TC class add dev $INET_INTERFACE parent 1:0 classid 1:1 \
    htb rate ${TC_UPLINK_RATE}kbit

for PRIO in $PRIOS
do
 echo -e "\n***** Prio $PRIO *****" > $LOG
 # Add leaf classes for PRIO traffic
 $TC class add dev $INET_INTERFACE parent 1:1  \
   classid 1:$[$PRIO+10] htb                 \
   rate ${TC_RATE[$PRIO]}"kbit"              \
   ceil ${TC_CEIL[$PRIO]}"kbit"              \
     prio $PRIO

 # Now filter PRIO traffic to this leaf
 $TC filter add dev $INET_INTERFACE parent 1:0         \
     protocol ip prio $PRIO handle ${MARK[$PRIO]}      \
   fw flowid 1:$[$PRIO+10]
done
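
To check that the classes exist and that traffic actually ends up where you
expect it, the usual tc status commands are enough (once the echo/debug lines
above are commented out again):
tc -s qdisc show dev eth0
tc -s class show dev eth0
tc filter show dev eth0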

Anyway, hope it helps someone.

Jannes Faber

----- Original Message -----
From: "Don Cohen" <don-lartc@isis.cs3-inc.com>
To: <lartc@mailman.ds9a.nl>; <abz@frogfoot.net>
Sent: Sunday, December 08, 2002 6:30 PM
Subject:  how to get the latency down on maxed out classes?


> > lets say I want to limit traffic to/from client to 64kbit. now, client
> > opens a tcp connection blasting away at full speed.
> >
> > If client now pings isp, it gets on average around 7 seconds latency. I
> > tried to improve this by using SFQ on the leaf nodes of my HTB hierarchy,
> > but that does not really improve the situation, only makes it much worse.
> > with SFQ I get anything between 250ms and 13 seconds latency.
>
> You understand what's going on here?
> As I recall, both pfifo and sfq default to queues of length 128
> packets.  If you fill that with 1500 byte packets you have ~200Kbytes
> which is about 1.6Mbits.  At 64Kbit/sec that would take ~30 sec to
> send so your latency could be as high as 30 sec.
> You can limit this latency by reducing the queue size.
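
For what it's worth, shortening the queue on an HTB leaf is a one-liner; a
sketch, assuming the 1:10 class from my script above:
tc qdisc add dev eth0 parent 1:10 handle 10: pfifo limit 20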
>
> On the other hand, the application that fills the queue evidently
> doesn't mind large latency.  Otherwise it wouldn't fill the queue.
>
> I think I posted to this list once a description (maybe even the
> code?) of another way to limit latency - drop packets that have been
> in the queue for more than a timeout period (I tend to use 3 sec).
>
> SFQ should have the desirable result that one tcp connection won't
> slow down another one or a ping.
>
> > I then tried fifos. With small packet fifos the packet loss is just
> > too great to be of any use, and even then the latency is quite high
> > (~200ms).
> You consider 200ms high?  One max size packet = 1500 bytes = 12Kbit
> which is about 200ms on a 64Kbit link.  You can't expect to do better.
>
> > I'm thinking of using RED, but the number of parameters is daunting and I
> > have no idea how the HTB rate correlates to packet size and burst rates
> > for RED.
> RED should be independent of HTB.
>

