Re: Re: HFSC

Linux Advanced Routing and Traffic Control

Patrick McHardy wrote:
> Late reply but here it is ;)

No worries - it wasn't exactly brief and I did have other stuff to
spend my time on.

BTW the Blue scheduler patch for 2.6 seems to be working nicely - but
I haven't had the time to run the tests on it that I wished to, so I
haven't posted anything further about it.

> Nuutti Kotivuori wrote:
>> Patrick McHardy wrote:

[...]

> I think it can be expressed easier like this:
>
> b(t) =
> {
>     m1 * t                    t <= d
>     m1 * d + m2 * (t - d)     t > d
> }
>
> b_rt(t) <= b_ls(t) <= b_ul(t) for all t >= 0

Yes, certainly - I just wished to eliminate t from it all.
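For anyone following along, the two-piece curve and the ordering constraint are easy to check numerically. A minimal sketch in Python (the three example curves below are hypothetical, just chosen to satisfy the constraint):

```python
def b(t, m1, d, m2):
    # Two-piece linear service curve: slope m1 up to the inflection
    # point d, slope m2 afterwards. Units: bits/s and seconds.
    if t <= d:
        return m1 * t
    return m1 * d + m2 * (t - d)

# b_rt(t) <= b_ls(t) <= b_ul(t) must hold for all t >= 0; spot-check
# it on a grid of sample points for three made-up curves.
rt = (100e3, 1.0, 200e3)
ls = (150e3, 1.0, 300e3)
ul = (200e3, 0.5, 400e3)
for i in range(400):
    t = i * 0.01
    assert b(t, *rt) <= b(t, *ls) <= b(t, *ul)
```

Since all three pieces are linear, checking the ordering at the inflection points and one point beyond the last inflection would actually suffice; the grid is just simpler to write.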

> No error is signalled when these are violated.

Right.

> The latter is correct, the class will participate in link-sharing,
> but will only be selected by the real-time criterion under full
> load.  It will also be punished later wrt. excess bandwidth as long
> as the parent constantly stays active.

Ah, yes. Makes perfect sense.

> It will still respect the upper-limit curve, but I'm not sure about
> the consequences for sibling-classes and the parent's active child
> list, I need to think about this some more. In any case it's not
> advisable to do so.

Okay.

> The sum of all realtime service curves must not be bigger than the
> service curve of the link itself, otherwise the service can't be
> guaranteed.

Nod.

> For link-sharing curves, it's actually not important that they don't
> exceed their parent because they only define a share, not an
> absolute amount of service. Only the relative differences between
> siblings matter.

Makes sense.

> Adding n curves gives you (in the worst case) a (n+1)-ary curve, you
> can calculate it like this:
>
> sc1: m1 = 100kbit, d = 1s, m2 = 200kbit
> sc2: m1 = 50kbit, d = 0.25s, m2 = 300kbit
> sc3: m1 = 200kbit, d = 1.5s, m2 = 500kbit
> -----------------------------------------
> m =
> {
> 	350kbit		d <= 0.25s
> 	600kbit		0.25s < d <= 1s
> 	700kbit		1s < d <= 1.5s
> 	1000kbit	d > 1.5s
> }

Right. I think there's a need for a small tool to make these
calculations - and perhaps even to scale other curves automatically
to maintain the restrictions. But that is for the future; manual
calculation will do for now.
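Such a tool could start from something very small. A sketch in Python, assuming curves are given as (m1, d, m2) tuples in kbit/s and seconds, which reproduces the piecewise slopes from the example above:

```python
def slope_of_sum(curves, t):
    # Slope (kbit/s) of the sum of two-piece service curves at time t:
    # each curve contributes m1 before its inflection point d, m2 after.
    return sum(m2 if t > d else m1 for (m1, d, m2) in curves)

curves = [
    (100, 1.0, 200),    # sc1
    (50, 0.25, 300),    # sc2
    (200, 1.5, 500),    # sc3
]

# Slopes of the summed curve: 350 kbit for d <= 0.25s, 600 kbit up to
# 1s, 700 kbit up to 1.5s, 1000 kbit beyond 1.5s.
```

Extending this to emit the full (n+1)-piece curve would only require sorting the inflection points and evaluating between them.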

> If it is possible to fulfill all demands with the available
> excess bandwidth then there is no difference. The real difference is
> of a different kind. A parent's link-sharing service curve might be
> violated by the real-time criterion used for one of its
> children. The parent's siblings will suffer from this as well
> (link-sharing wise) because they share the same grandparent, and part
> of the service given to all children of that parent has been used in
> violation of link-sharing, so link-sharing-only leaves will
> suffer. An example for this is in the HFSC paper on page 6.

Right, I think I understand this now.

>>> - The sum of all real-time curves must not exceed 100%.
[...]
> Yes, actually link capacity. Already explained above.
>
>> And what happens if they do?
[...]
> Nothing bad will happen, only the guarantees can't be met anymore.
> It will still pick the class with the smallest deadline.

Okay, this is good to know.

>> If only relative difference matters, why must link-sharing service
>> curves be larger than real-time service curves? And smaller than
>> upper-limit service curves?
>
> They don't. It just makes it simpler to assure that the service
> given to a class is at least the amount defined by the real-time
> curve, which is usually what you want.

Exactly.

>> I also seem to have a lot of difficulties in trying to simulate the
>> behaviour of the qdisc.
[...]
> Have you applied the patches from trash.net/~kaber/hfsc/tcsim ?
> With HZ=1000 and PSCHED_GETTIMEOFDAY as clocksource I got very
> good results.

I tried both HZ=1000 and HZ=100, and the results were odd. But I think
I didn't touch the clocksource at all. I will try later on with
PSCHED_GETTIMEOFDAY as well.

>> Also, something as trivial as this:
>> tc qdisc add dev $DEV root handle 1: hfsc default 1
>> tc class add dev $DEV parent 1: classid 1:1 hfsc rt m2 100kbps
>> seems to work for 'eth0' but not for the 'lo' interface, whereas for
>> example the 'tbf' qdisc does work for 'lo' as well. If I run those
>> commands on 'lo', every packet shows up as dropped by the qdisc.
>
> Works ok here .. do you mean inside tcsim ?

No, I don't mean inside tcsim. Here is a full transcript:

*****
shiro:~# export DEV=lo
shiro:~# tc -s -d qdisc show dev $DEV
shiro:~# ping localhost -c 4
PING shiro.i.naked.iki.fi (127.0.0.1) 56(84) bytes of data.
64 bytes from shiro.i.naked.iki.fi (127.0.0.1): icmp_seq=1 ttl=64 time=0.062 ms
64 bytes from shiro.i.naked.iki.fi (127.0.0.1): icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from shiro.i.naked.iki.fi (127.0.0.1): icmp_seq=3 ttl=64 time=0.059 ms
64 bytes from shiro.i.naked.iki.fi (127.0.0.1): icmp_seq=4 ttl=64 time=0.060 ms

--- shiro.i.naked.iki.fi ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.059/0.060/0.062/0.001 ms

shiro:~# tc qdisc add dev $DEV root handle 1: hfsc default 1
shiro:~# tc class add dev $DEV parent 1: classid 1:1 hfsc rt m2 100kbps
shiro:~# ping localhost -c 4
PING shiro.i.naked.iki.fi (127.0.0.1) 56(84) bytes of data.
^C
--- shiro.i.naked.iki.fi ping statistics ---
0 packets transmitted, 0 received

shiro:~# tc -s -d qdisc show dev $DEV
qdisc hfsc 1: default 1 
 Sent 0 bytes 0 pkts (dropped 17, overlimits 0) 

shiro:~# tc -s -d class show dev $DEV
class hfsc 1: root 
 Sent 0 bytes 0 pkts (dropped 0, overlimits 0) 
 period 0 level 1 

class hfsc 1:1 parent 1: rt m1 0bps d 0us m2 800Kbit 
 Sent 0 bytes 0 pkts (dropped 17, overlimits 0) 
 period 0 level 0 

shiro:~# tc qdisc del dev $DEV root
shiro:~# ping localhost -c 4
PING shiro.i.naked.iki.fi (127.0.0.1) 56(84) bytes of data.
64 bytes from shiro.i.naked.iki.fi (127.0.0.1): icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from shiro.i.naked.iki.fi (127.0.0.1): icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from shiro.i.naked.iki.fi (127.0.0.1): icmp_seq=3 ttl=64 time=0.057 ms
64 bytes from shiro.i.naked.iki.fi (127.0.0.1): icmp_seq=4 ttl=64 time=0.059 ms

--- shiro.i.naked.iki.fi ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.057/0.059/0.060/0.001 ms

shiro:~# uname -a
Linux shiro 2.6.3-shiro-1 #1 Sat Mar 6 21:03:38 EET 2004 i686 GNU/Linux
*****

So, I don't know what's up with that. I will debug further.

Anyhow, I will try now to come up with a working setup of HFSC for my
personal use, and in the process I will try to document a sane method
for coming up with the service curves and setting up the whole thing.

If that works out, more comprehensive documentation can come later.

This has been very helpful, thank you!

-- Naked
_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
