HTB and PRIO qdiscs introducing extra latency when output interface is saturated

Linux Advanced Routing and Traffic Control

I'm using a Linux machine with standard PC hardware and 3 separate PCI
network interfaces to operate as a DiffServ core router using Linux
traffic control. The machine is a P4 2.8 GHz with 512 MB RAM running Fedora
Core 3 with the 2.6.12.3 kernel. All links and network interfaces are
full-duplex Fast Ethernet. IP forwarding is enabled in the kernel. All
hosts on the network have their time synchronised using a stratum 1
server on the same VLAN. Below is an ASCII diagram of the network.

(network A) edge router ------>core router---->edge router (network C)
                                    ^
                                    |
                                    |
                               edge router
                               (network B)
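
These settings can be double-checked with sysctl and ethtool; a minimal
sketch only (interface names such as eth2 are an assumption, not
necessarily the real ones):

# confirm IPv4 forwarding is on (as stated above)
sysctl net.ipv4.ip_forward
# confirm speed/duplex on the interface facing network C (eth2 assumed)
ethtool eth2 | grep -iE 'speed|duplex'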

Core Router Configuration:
---------------------------
The core router implements the Expedited Forwarding (EF) PHB. I have
tried two different configurations (the exact tc commands for both are
included at the end of this mail).
1. An HTB qdisc with two HTB classes. One class services VoIP traffic
(marked with the EF codepoint) and guarantees it a minimum rate of
1500 kbit. This HTB class is serviced by a FIFO queue with a limit of
5 packets. The second HTB class guarantees all other traffic a minimum
rate of 5 Mbit and is serviced by a RED qdisc.

2. A PRIO qdisc with a token bucket filter to service VoIP traffic
(marked with the EF codepoint) at a guaranteed minimum rate of
1500 kbit, and a RED qdisc to service all other traffic.

Test 1.
---------------------------
VoIP traffic originates from network A and is destined for network C. The
throughput of the VoIP traffic is 350 kbit. No other traffic passes through
the core router during this time. These VoIP packets are marked with the
EF codepoint. Using either of the above configurations for the core
router, the delay of the VoIP traffic in travelling from network A to
network C through the core router is 0.25 milliseconds.
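
For anyone wanting to reproduce this: a quick way to spot-check the
latency of EF-marked packets from a host in network A is ping with the
ToS byte set. Just a sketch; the host name is a placeholder and 0xb8 is
the ToS byte corresponding to DSCP 46 (EF):

# send EF-marked echo requests to a host in network C and look at the RTTs
ping -Q 0xb8 -c 100 host-in-network-C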

Test 2.
---------------------------
Again VoIP traffic originates from network A and is destined for network
C with a throughput of 350 kbit. TCP traffic also originates from
another host in network A and is destined for another host in network C.
More TCP traffic originates from network B and is destined for network C.
This TCP traffic comes from transferring large files over HTTP. As a
result a bottleneck is created at the core router's outgoing interface
to network C. The combined TCP traffic from these sources is nearly
100 Mbit. Using either of the above configurations for the core router,
the delay of the VoIP traffic in travelling from network A to network C
through the core router is 30 milliseconds, with 0% loss. A considerable
number of TCP packets are dropped.
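
While the bottleneck is active, the per-qdisc and per-class counters on
the core router's interface towards network C show where packets queue
up and where the drops happen; a minimal sketch, assuming that interface
is eth2:

# backlog and drop counters for every qdisc and class on the interface facing network C
tc -s qdisc show dev eth2
tc -s class show dev eth2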

Could anyone tell me why the delay is so high (30 ms) for VoIP packets,
which are treated with the EF PHB, when the outgoing interface of the
core router to network C is saturated?

Is it due to operating system factors?
Has anyone else had similar experiences?

Also, I would appreciate it if anyone could give me performance metrics
for approximately how many packets per second a router running Linux on
standard PC hardware can forward, or could mention any factors that would
affect this performance. I assume the system timer interrupt frequency HZ
will affect performance in some way.
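
In case it helps anyone answering: a crude way to estimate the forwarding
rate in packets per second is to sample the transmit counter of the
outgoing interface over one second. A sketch only, assuming eth2 and that
sysfs is mounted:

# packets transmitted on eth2 during a one-second interval
P1=$(cat /sys/class/net/eth2/statistics/tx_packets)
sleep 1
P2=$(cat /sys/class/net/eth2/statistics/tx_packets)
echo "$((P2 - P1)) packets/s"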

Jonathan Lynch


Note: I already posted the same question to the list a few weeks back
but got no reply. I have reworded my question so it is clearer.

-----------------------------------------------------------------------------------------------
The config I used for each setup is included below. These are slight
modifications of the examples supplied with the iproute2 source code.

Config 1 using htb
-------------------
tc qdisc add dev $1 handle 1:0 root dsmark indices 64 set_tc_index
tc filter add dev $1 parent 1:0 protocol ip prio 1 tcindex mask 0xfc shift 2

# Main htb qdisc & class
tc qdisc add dev $1 parent 1:0 handle 2:0 htb
tc class add dev $1 parent 2:0 classid 2:1 htb rate 100Mbit ceil 100Mbit

# EF class (2:10)
tc class add dev $1 parent 2:1 classid 2:10 htb rate 1500Kbit ceil 100Mbit
tc qdisc add dev $1 parent 2:10 pfifo limit 5
tc filter add dev $1 parent 2:0 protocol ip prio 1 handle 0x2e tcindex classid 2:10 pass_on

# BE class (2:20)
tc class add dev $1 parent 2:1 classid 2:20 htb rate 5Mbit ceil 100Mbit
tc qdisc add dev $1 parent 2:20 red limit 60KB min 15KB max 45KB burst 20 avpkt 1000 bandwidth 100Mbit probability 0.4
tc filter add dev $1 parent 2:0 protocol ip prio 2 handle 0 tcindex mask 0 classid 2:20 pass_on
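
For anyone checking this setup, the per-class statistics show whether EF
traffic really ends up in class 2:10 and everything else in 2:20; a
sketch using the same $1 interface argument as above:

# per-class packet/byte counters and the installed filters on the interface
tc -s class show dev $1
tc filter show dev $1 parent 2:0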

Config 2 using PRIO
-------------------
# Main dsmark & classifier
tc qdisc add dev $1 handle 1:0 root dsmark indices 64 set_tc_index
tc filter add dev $1 parent 1:0 protocol ip prio 1 tcindex mask 0xfc shift 2

# Main prio queue
tc qdisc add dev $1 parent 1:0 handle 2:0 prio
tc qdisc add dev $1 parent 2:1 tbf rate 1.5Mbit burst 1.5kB limit 1.6kB
tc filter add dev $1 parent 2:0 protocol ip prio 1 handle 0x2e tcindex classid 2:1 pass_on

# BE class (2:2)
tc qdisc add dev $1 parent 2:2 red limit 60KB min 15KB max 45KB burst 20 avpkt 1000 bandwidth 100Mbit probability 0.4
tc filter add dev $1 parent 2:0 protocol ip prio 2 handle 0 tcindex mask 0 classid 2:2 pass_on
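
To confirm that the VoIP packets arriving at the core router are actually
marked EF before either configuration classifies them, a tcpdump filter
on the DSCP bits can be used; a sketch, with eth0 assumed as the inbound
interface (DSCP 46 corresponds to ToS byte 0xb8):

# show only packets whose DSCP field equals EF (46) on the inbound interface
tcpdump -n -i eth0 'ip and (ip[1] & 0xfc) = 0xb8'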


_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
