Wonder Shaper weirdness.

Linux Advanced Routing and Traffic Control

I'm troubleshooting a modified version of the Wonder Shaper and have
run into some trouble that I was hoping you could help me look into.

I have 4 queues that I add traffic to:

1.  VoIP phone stuff
2.  ssh traffic
3.  HTTP
4.  everything else (should be default).

Latency definitely increases when I run it.  To test the catch-all
rule (#4), I removed the specific rule for HTTP; presumably HTTP
traffic should then be caught by rule 4.

I'm not sure what happens to the HTTP traffic, but I do know that I
get horrible latency after removing the specific rule that catches it.
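
One way to check where the packets actually end up is to watch the
per-class counters (this is just the status check from the script
below) while fetching a web page:

	tc -s class show dev eth0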

Could you possibly look over my rules and give me some feedback?

I'm also confused by your Stochastic Fairness rules.  I don't understand
why you need them if we are using HTB as our qdisc.

S.

-- 
_ pictures ________________________________ http://www.imaginator.com
_ contact _________________________ http://imaginator.com/contact.php
#!/bin/sh
# Based on the Wonder Shaper, with modifications by simon@imaginator.com
# This script sets up 4 HTB classes and assigns traffic of different
# priority to each one.  The script also shapes incoming traffic to stop
# buffering at our ISP, which kills latency.

# I have a 1.5Mbps downlink and a 384Kbps uplink ADSL connection.
# The central office is close enough that I get the full speed.
# I set the link up for a slightly lower speed each way.

# These are the inbound and outbound speeds in Kbps
DOWNLINK=1300
UPLINK=300
DEV=eth0


if [ "$1" = "start" ]
then
echo "installing rules";
	# uplink first

	# install root HTB, point default traffic to 1:40:
	tc qdisc add dev $DEV root handle 1: htb default 40

	# shape everything at $UPLINK speed - this prevents huge queues in your
	# DSL modem which destroy latency:
	tc class add dev $DEV parent 1: classid 1:1 htb rate ${UPLINK}kbit burst 6k
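	# Note: no 'ceil' is given above; in HTB a class's ceil defaults to
	# its rate, so class 1:1 caps the whole tree at ${UPLINK}kbit.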

	# setup limit queues
	# I believe that this sets up 4 buckets, with bucket 1 having priority
	# over bucket 4.  Bucket 1 is emptied before anything in bucket 2 is
	# touched.

	tc class add dev $DEV parent 1:1 classid 1:10 htb rate $((10*UPLINK/10))kbit \
		burst 6k prio 1

	tc class add dev $DEV parent 1:1 classid 1:20 htb rate $((10*UPLINK/10))kbit \
		burst 6k prio 2

	tc class add dev $DEV parent 1:1 classid 1:30 htb rate $((10*UPLINK/10))kbit \
		burst 6k prio 3

	tc class add dev $DEV parent 1:1 classid 1:40 htb rate $((10*UPLINK/10))kbit \
		burst 6k prio 4
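
	# A sketch (not in the original script): since every class above is
	# given the full ${UPLINK}kbit as its rate, no single class has a
	# guaranteed share.  An alternative is to give each class a fraction
	# of the uplink and let it borrow up to the full link with 'ceil':
	#tc class add dev $DEV parent 1:1 classid 1:10 htb rate $((4*UPLINK/10))kbit \
	#	ceil ${UPLINK}kbit burst 6k prio 1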

	# The original Wonder Shaper had these lines in; they seemed
	# superfluous because we are already using HTB:

	# all get Stochastic Fairness:
	#tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
	#tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
	#tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10
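
	# Note: these are probably not superfluous.  HTB classes get a plain
	# pfifo leaf by default, so one bulk flow can hog its whole class.
	# HTB divides bandwidth *between* classes; SFQ shares a class fairly
	# among its flows.  Re-enabling the lines above (plus one for 1:40)
	# would restore per-flow fairness:
	#tc qdisc add dev $DEV parent 1:40 handle 40: sfq perturb 10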

	# We now have 4 classes that we can start assigning traffic to.
	# I think this works like iptables rules, i.e. the filters are
	# scanned in order until one matches, and the packet is then
	# dumped into the matching class.

	# This is the ping latency test to see how things are working.
	# Smokeping pings hosts every 5 minutes and records the round-trip time.
	# Here, packets going to one network get top priority and packets
	# going to the other get low priority.  The results are at:
	# http://imaginator.com/cgi-bin/smokeping.cgi?target=Latecy

	tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
		match ip dst 64.3.149.1 flowid 1:10
	tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
		match ip dst 64.3.151.1 flowid 1:40

	# VOIP stuff gets top priority
	tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
		match ip dport 10000 0xffff flowid 1:10
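
	# A sketch (not in the original script): 'dport 10000 0xffff' matches
	# port 10000 exactly.  If the phone uses a range of RTP ports, an
	# aligned power-of-two range can be matched via the mask, e.g. ports
	# 10000-10015:
	#tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
	#	match ip dport 10000 0xfff0 flowid 1:10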

	# ssh connections are second priority for outgoing
	tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
		match ip dport 22 0xffff  flowid 1:20

	# outbound webpages and my streaming mp3s
	tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
		match ip sport 80 0xffff  flowid 1:30
	tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
		match ip sport 7000 0xffff  flowid 1:30

	# the rest is 'non-interactive', i.e. 'bulk', and ends up in 1:40
	tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
		match ip dst 0.0.0.0/0 flowid 1:40
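
	# Note: all the filters above use prio 10.  To make the evaluation
	# order explicit, the catch-all could be given a higher prio number
	# (i.e. lower precedence), e.g.:
	#tc filter add dev $DEV parent 1:0 protocol ip prio 20 u32 \
	#	match ip dst 0.0.0.0/0 flowid 1:40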

	# returning traffic is slowed slightly so return packets don't get
	# buffered at the ISP and kill our latency.
	# attach ingress filter:
	tc qdisc add dev $DEV handle ffff: ingress

	# filter *everything* to it (0.0.0.0/0), drop everything that's
	# coming in too fast:
	tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
		0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1
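
	# To see how much the policer is dropping, check its counters with:
	#	tc -s filter show dev $DEV parent ffff: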

	# that's it, we're shaping.
fi

if [ "$1" = "status" ]
then
echo "showing status";
	tc -s qdisc ls dev $DEV
	tc -s class ls dev $DEV
	exit
fi

if [ "$1" = "restart" ]
then
	$0 stop
	$0 start
fi

if [ "$1" = "stop" ] 
then 
echo "shutting down";
	# clean existing down- and uplink qdiscs, hide errors
	tc qdisc del dev $DEV root    2> /dev/null > /dev/null
	tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null
fi
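
# Usage, assuming the script is saved as 'wshaper' and run as root:
#	./wshaper start
#	./wshaper status
#	./wshaper stop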


