Re: HFSC and prioritization

Linux Advanced Routing and Traffic Control


Patrick McHardy wrote:

>Alexandru Dragoi wrote:
>
>>I think I'd like more docs in English about HFSC.
>
>Me too. I don't have time to write one myself (and I'm not good at
>this), but I can assist if anyone wants to do it.
>
>>I would also like to know some tips about scalability with large amounts
>>of traffic, like more than 100 Mbit and more than 20 kpps. I once had a
>>setup that shared 200 Mbit on 2 IMQ devices (both with a parent class of
>>200 Mbit), each with about 1000 HFSC classes: about 2 classes with only
>>rt curves, another 2 with rt, ls and ul curves, and the rest (end users)
>>with ls and ul curves, all with only the m2 parameter. At high traffic,
>>packet loss appeared. After switching to HTB, there was no more packet
>>loss.
>>
>>Thanks in advance.
>
>Mhh .. I know of setups where HFSC is running with 10k classes and high
>bandwidth (>= 100 Mbit, I don't know the exact amount). When switching to
>rbtrees I made some benchmarks and it performed almost identically to
>HTB, so my guess is that it's related to IMQ, which still seems to be
>pretty broken. It could of course also be a different configuration
>mistake; hard to tell without seeing the actual configuration.
Hello, here are some lines from the HFSC script I used.

#!/bin/bash

tc=/sbin/tc

$tc qdisc add dev imq0 root handle 1: hfsc default 3
$tc class add dev imq0 parent 1: classid 1:3 hfsc ls m2 200mbit ul m2 200mbit
$tc class add dev imq0 parent 1: classid 1:2 hfsc ls m2 200mbit ul m2 200mbit

$tc qdisc add dev imq1 root handle 1: hfsc default 3
$tc class add dev imq1 parent 1: classid 1:3 hfsc ls m2 20mbit ul m2 20mbit
$tc class add dev imq1 parent 1: classid 1:2 hfsc ls m2 20mbit ul m2 20mbit

$tc qdisc add dev imq2 root handle 1: hfsc default 3
$tc class add dev imq2 parent 1: classid 1:3 hfsc ls m2 200mbit ul m2 200mbit
$tc class add dev imq2 parent 1: classid 1:2 hfsc ls m2 200mbit ul m2 200mbit

$tc qdisc add dev imq3 root handle 1: hfsc default 3
$tc class add dev imq3 parent 1: classid 1:3 hfsc ls m2 20mbit ul m2 20mbit
$tc class add dev imq3 parent 1: classid 1:2 hfsc ls m2 20mbit ul m2 20mbit
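
Side note: the script does not show how packets actually reach the imq
devices. With the IMQ patch that is normally done from iptables with the
IMQ target; a rough sketch, with a made-up VLAN interface name and an
assumed mapping of directions to devices, would be something like:

# Sketch only, not from the original setup: redirect traffic coming in
# from one example VLAN to imq0, and traffic going out to it to imq2.
iptables -t mangle -A PREROUTING  -i eth0.100 -j IMQ --todev 0
iptables -t mangle -A POSTROUTING -o eth0.100 -j IMQ --todev 2

The real setup presumably had one such pair of rules per shaped VLAN.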


## An important client

$tc class add dev imq0 parent 1:2 classid 1:0x6031 hfsc rt m2 100mbit
$tc qdisc add dev imq0 parent 1:0x6031 sfq
$tc filter add dev imq0 parent 1: protocol ip prio 10 u32 match ip dst x.y.z.0/22 flowid 1:0x6031

$tc class add dev imq1 parent 1:2 classid 1:0x6031 hfsc rt m2 9mbit
$tc qdisc add dev imq1 parent 1:0x6031 sfq
$tc filter add dev imq1 parent 1: protocol ip prio 10 u32 match ip dst x.y.z.0/22 flowid 1:0x6031

$tc class add dev imq2 parent 1:2 classid 1:0x6031 hfsc rt m2 100mbit
$tc qdisc add dev imq2 parent 1:0x6031 sfq
$tc filter add dev imq2 parent 1: protocol ip prio 10 u32 match ip src x.y.z.0/22 flowid 1:0x6031

$tc class add dev imq3 parent 1:2 classid 1:0x6031 hfsc rt m2 9mbit
$tc qdisc add dev imq3 parent 1:0x6031 sfq
$tc filter add dev imq3 parent 1: protocol ip prio 10 u32 match ip src x.y.z.0/22 flowid 1:0x6031

There was also a client with rt m2 20mbit ls m2 20mbit ul m2 100mbit on
imq0 and imq2, and rt m2 5mbit ls m2 5mbit ul m2 10mbit on the other two
IMQs; a rough sketch of those commands is below.
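
For the record, on imq0 such a client would have looked roughly like the
following; the classid 1:0x7001 and the p.q.r.0/24 prefix are placeholders,
only the curve values come from the description above:

# Rough reconstruction, not the original lines: a client with a real-time
# guarantee plus link-sharing and an upper limit on imq0.
$tc class add dev imq0 parent 1:2 classid 1:0x7001 hfsc rt m2 20mbit ls m2 20mbit ul m2 100mbit
$tc qdisc add dev imq0 parent 1:0x7001 sfq
$tc filter add dev imq0 parent 1: protocol ip prio 10 u32 match ip dst p.q.r.0/24 flowid 1:0x7001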

The rest of the clients, around 1000 of them, each have something like:

$tc class add dev imq0 parent 1:2 classid 1:0x116a hfsc ls m2 16Kbit ul m2 20480Kbit
$tc qdisc add dev imq0 parent 1:0x116a sfq
$tc filter add dev imq0 parent 1: protocol ip prio 10 u32 match ip dst a.b.c.d/32 flowid 1:0x116a

$tc class add dev imq1 parent 1:2 classid 1:0x116a hfsc ls m2 16Kbit ul m2 256Kbit
$tc qdisc add dev imq1 parent 1:0x116a sfq
$tc filter add dev imq1 parent 1: protocol ip prio 10 u32 match ip dst a.b.c.d/32 flowid 1:0x116a

$tc class add dev imq2 parent 1:2 classid 1:0x116a hfsc ls m2 16Kbit ul m2 20480Kbit
$tc qdisc add dev imq2 parent 1:0x116a sfq
$tc filter add dev imq2 parent 1: protocol ip prio 10 u32 match ip src a.b.c.d/32 flowid 1:0x116a

$tc class add dev imq3 parent 1:2 classid 1:0x116a hfsc ls m2 16Kbit ul m2 256Kbit
$tc qdisc add dev imq3 parent 1:0x116a sfq
$tc filter add dev imq3 parent 1: protocol ip prio 10 u32 match ip src a.b.c.d/32 flowid 1:0x116a
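
Writing roughly 1000 of those blocks by hand would be error-prone; a small
generator loop along the lines below could emit them. The clients.txt
format, the rates and the classid handling here are assumptions for
illustration, not the original script:

#!/bin/bash
# Hypothetical per-client generator; clients.txt is assumed to hold
# "ip classid_hex ceil_a ceil_b" per line, e.g. "a.b.c.d 116a 20480Kbit 256Kbit".
tc=/sbin/tc
while read ip cid ceil_a ceil_b; do
    # imq0/imq2 share one ceiling, imq1/imq3 the other, as in the template above
    for dev in imq0 imq2; do
        $tc class add dev $dev parent 1:2 classid 1:0x$cid hfsc ls m2 16Kbit ul m2 $ceil_a
        $tc qdisc add dev $dev parent 1:0x$cid sfq
    done
    for dev in imq1 imq3; do
        $tc class add dev $dev parent 1:2 classid 1:0x$cid hfsc ls m2 16Kbit ul m2 $ceil_b
        $tc qdisc add dev $dev parent 1:0x$cid sfq
    done
    # imq0/imq1 classify on destination address, imq2/imq3 on source
    for dev in imq0 imq1; do
        $tc filter add dev $dev parent 1: protocol ip prio 10 u32 match ip dst $ip/32 flowid 1:0x$cid
    done
    for dev in imq2 imq3; do
        $tc filter add dev $dev parent 1: protocol ip prio 10 u32 match ip src $ip/32 flowid 1:0x$cid
    done
done < clients.txt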


Some of the clients have rates half or double the numbers above, depending
on how much they pay.
The last lines of the script are:

$tc class change dev imq0 parent 1: classid 1:3 hfsc ls m2 16Kbit ul m2 256Kbit
$tc class change dev imq1 parent 1: classid 1:3 hfsc ls m2 8Kbit ul m2 128Kbit
$tc class change dev imq2 parent 1: classid 1:3 hfsc ls m2 16Kbit ul m2 256Kbit
$tc class change dev imq3 parent 1: classid 1:3 hfsc ls m2 8Kbit ul m2 128Kbit

Now, the machine had multiple interfaces, I think 2 gigabit cards with
about 5-6 VLANs. The idea of the IMQs was to shape only traffic that comes
from or goes to certain VLANs, leaving traffic between clients unshaped.
The machine also did BGP, with 1400 learned prefixes plus a default route,
everything running on a 2.6.11 kernel, I think. I'm sure there was a need
for lots of tuning, like u32 hash filters and many other things (a sketch
of such hash filters is below). With this setup, at high traffic things
went crazy: packet loss appeared, real-time classes didn't get their
traffic, and so on. But by ONLY changing from HFSC to HTB, things worked
much better and the important clients got their guaranteed bandwidth.
Since then things have changed a lot: u32 hash filters are now used, with
HTB, I got a job somewhere else, and so on. :) Any ideas about what I did
there are really welcome. Thank you.
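
The u32 hash filters mentioned above would look roughly like this; the
hash table number, the a.b.c.0/24 prefix, the bucket and the classid are
illustrative assumptions, not part of the original setup:

# Sketch only: hash the per-IP filters on imq0 by the last octet of the
# destination address instead of walking one long linear filter list.
$tc filter add dev imq0 parent 1: prio 10 handle 2: protocol ip u32 divisor 256
# Jump from the root table into table 2:, indexed by the last byte of the
# destination IP (offset 16 in the IP header).
$tc filter add dev imq0 parent 1: protocol ip prio 10 u32 ht 800:: match ip dst a.b.c.0/24 hashkey mask 0x000000ff at 16 link 2:
# One entry per client, placed in the bucket that matches its last octet
# (0xd4 = 212 here), so lookups no longer scale with the number of clients.
$tc filter add dev imq0 parent 1: protocol ip prio 10 u32 ht 2:d4: match ip dst a.b.c.212/32 flowid 1:0x116a
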
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
