Re: Re: HFSC

Linux Advanced Routing and Traffic Control


 



Late reply but here it is ;)

Nuutti Kotivuori wrote:
Patrick McHardy wrote:

When multiple curves are used, the following must hold:
rt <= ls <= ul


If this is for all t, in practice this means:

  if d > 0
     m1(rt) <= m1(ls) <= m1(ul)
     m2(rt) <= m2(ls) <= m2(ul)
     if m1 < m2
       d(rt) >= d(ls) >= d(ul)
     elsif m1 > m2
       d(rt) <= d(ls) <= d(ul)
     else
       d irrelevant
  else
     m1 irrelevant
     m2(rt) <= m2(ls) <= m2(ul)

Am I correct? What happens if these values are violated? Are any
errors signalled?

I think it can be expressed more simply, like this:


b(t) =
{
   m1 * t   			t <= d
   m1 * d + m2 * (t - d)	t > d
}

b_rt(t) <= b_ls(t) <= b_ul(t) for all t >= 0

No error is signalled when these are violated.
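The two-piece curve b(t) and the ordering constraint above can be sketched in Python. This is only an illustration with made-up curve values, since as noted the qdisc itself does no such validation:

```python
def make_curve(m1, d, m2):
    """Two-piece HFSC service curve: slope m1 until time d, then slope m2."""
    def b(t):
        if t <= d:
            return m1 * t
        return m1 * d + m2 * (t - d)
    return b

# Hypothetical curves (rates in kbit/s, d in seconds).
b_rt = make_curve(100, 1.0, 200)   # real-time
b_ls = make_curve(150, 1.0, 300)   # link-sharing
b_ul = make_curve(200, 1.0, 400)   # upper-limit

# Check b_rt(t) <= b_ls(t) <= b_ul(t) on a sample grid of t values.
ok = all(b_rt(t) <= b_ls(t) <= b_ul(t)
         for t in [i / 10 for i in range(0, 50)])
print(ok)  # True for these values
```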

Also, I have very little clue why these must hold as such.

Obviously if a link-sharing curve is smaller than the real-time curve,
then when the class participates in link-sharing, it would have to
have sent less than it already has. But if this is so, does the
algorithm break totally, or does that only mean that the class does
not participate in link-sharing before the excess bandwidth share for
the class based on the relative link-sharing service curve goes above
the real-time service curve? The latter would not necessarily be
unwanted behaviour.

The latter is correct: the class will participate in link-sharing, but will only be selected by the real-time criterion under full load. It will also be punished later wrt. excess bandwidth as long as the parent constantly stays active.

Then if the upper-limit service curve is smaller than the link-sharing
curve, what would this cause? Naive assumptions would lead me to think
that it would merely mean that the class participates in link-sharing
based on the relative service curve it has, but never ends up taking
more than what the upper-limit service curve dictates. Eg. in a case
with a relatively large link-sharing service curve and a smaller
upper-limit service curve the class would get a big share out of a
small amount of excess bandwidth shared, but as bandwidth to share is
increased, upper-limit service curve will limit it to a constant
limit. Or again, does the algorithm break somehow?

It will still respect the upper-limit curve, but I'm not sure about the consequences for sibling-classes and the parent's active child list, I need to think about this some more. In any case it's not advisable to do so.

And I am even more confused when I think what this means for the
interior classes and their service curves. Apparently the service
curves for parent classes are respected, but does that mean that the
service curve for a parent class would have to be equal to or larger
than the sum of all child classes' service curves?  If so, how does one
calculate this? It would be an n-ary curve if calculated exactly. Or
if not so, then what happens if the service curves of child classes
exceed the service curve of the parent class? Obviously here we are
talking only about link-sharing service curves (and upper-limit
service curves) as real-time service curves are always fulfilled and
only handled by leaf classes.

The sum of all realtime service curves must not be bigger than the service curve of the link itself, otherwise the service can't be guaranteed. For link-sharing curves, it's actually not important that they don't exceed their parent because they only define a share, not an absolute amount of service. Only the relative differences between siblings matter.

Adding n two-piece curves gives you (in the worst case) an (n+1)-piece
curve; you can calculate it like this:

sc1: m1 = 100kbit, d = 1s, m2 = 200kbit
sc2: m1 = 50kbit, d = 0.25s, m2 = 300kbit
sc3: m1 = 200kbit, d = 1.5s, m2 = 500kbit
-----------------------------------------
m =
{
	350kbit		t <= 0.25s
	600kbit		0.25s < t <= 1s
	700kbit		1s < t <= 1.5s
	1000kbit	t > 1.5s
}
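The summation above can be reproduced mechanically. A small sketch using the same three curves: between any two adjacent d values the total slope is constant, which is why the sum is at worst an (n+1)-piece curve:

```python
def slope_at(t, m1, d, m2):
    """Slope of a two-piece service curve at time t."""
    return m1 if t <= d else m2

curves = [            # (m1 kbit/s, d s, m2 kbit/s)
    (100, 1.00, 200), # sc1
    (50,  0.25, 300), # sc2
    (200, 1.50, 500), # sc3
]

# Breakpoints are the union of the d values; sampling each interval is
# enough because the total slope only changes at a breakpoint.
breakpoints = sorted({d for _, d, _ in curves})
samples = breakpoints + [breakpoints[-1] + 1.0]
for t in samples:
    total = sum(slope_at(t, m1, d, m2) for m1, d, m2 in curves)
    print(f"t = {t}s: total slope {total} kbit/s")
# prints total slopes 350, 600, 700, 1000 kbit/s
```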

The upper-limit curve is used to limit the link-sharing
curve. Without an upper-limit curve, packets are dequeued at the
speed the underlying device is capable of. For example in the case
of software devices, this is not very desirable, so you can limit
the total output rate.


I came to this conclusion by experimentation. So upper-limit service
curve can be used to shape link-sharing usage - but real-time service
curves are fulfilled regardless of it? So the end result would be that
a class with only a real-time service curve throttles itself to the
rate, with a link-sharing service curve becomes work-conserving and
with an upper-limit service curve it throttles itself again.

Exactly.


For your other questions:
- If you specify only a real-time service curve the class will not
participate in link-sharing. This means it can only send at its
configured rate. The difference to a link-share + upper-limit curve
is that the service is guaranteed.

- If you specify only a link-share curve there are no deadlines and
no guarantees can be given.


Right. After a while I realized this. But again I am somewhat
uncertain of the ramifications.

If we assume classes that have real-time service curves equal to
link-sharing service curves, and compare those to classes that have
only link-sharing service curves, in an environment where there is
excess bandwidth to share, how will the results be different? If there
is a difference, is it because the link-sharing algorithm might choose
to not fulfill the service curve of a class immediately in the
interest of exact fairness, where the real-time constraint would have
forced to abandon fairness for a moment to fulfill the service curve?

If it is possible to fulfill all demands with the available excess bandwidth, then there is no difference. The real difference is of another kind: a parent's link-sharing service curve might be violated by the real-time criterion used for one of its children. The parent's siblings will suffer from this as well (link-sharing wise), because they share the same parent and part of the service given to all siblings of this parent has been used in violation of link-sharing; so link-sharing-only leaves will suffer. An example of this is in the HFSC paper on page 6.

- The sum of all real-time curves must not exceed 100%.

100% of what? Actual link capacity? And does this mean for any time t in the service curve? How do I confirm that a bunch of service curves with different m1, d and m2 values never end up exceeding 100% anywhere?

Yes, actual link capacity. Already explained above.
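One way to confirm this mechanically: the sum of two-piece curves is itself piecewise linear, so it stays at or below the link's line capacity * t for all t exactly when it does so at every breakpoint and its final slope does not exceed the capacity. A sketch with hypothetical curve values:

```python
def b(t, m1, d, m2):
    """Cumulative service of a two-piece curve up to time t."""
    return m1 * t if t <= d else m1 * d + m2 * (t - d)

def rt_sum_fits(curves, capacity):
    """True if the sum of the real-time curves never exceeds capacity * t.

    The sum is piecewise linear, so checking every breakpoint plus the
    final slope covers all t >= 0.
    """
    breakpoints = sorted({d for _, d, _ in curves})
    at_breaks = all(
        sum(b(t, *sc) for sc in curves) <= capacity * t
        for t in breakpoints)
    final_slope = sum(m2 for _, _, m2 in curves)
    return at_breaks and final_slope <= capacity

# Hypothetical: three real-time curves on a 1000 kbit/s link.
curves = [(100, 1.0, 200), (50, 0.25, 300), (200, 1.5, 500)]
print(rt_sum_fits(curves, 1000))  # True: final slope is exactly 1000
```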


And what happens if they do? Obviously there's no bandwidth shared,
but does the algorithm still try to fulfill the requirements as well
as it can, picking the real-time class that is behind the schedule the
most? Or will some class(es) end up dominating the link? Or will the
whole algorithm break down because some variable grows negative or
something?

Nothing bad will happen, only the guarantees can't be met anymore. It will still pick the class with the smallest deadline.


- The actual bandwidth assigned to link-sharing classes doesn't
matter, only the relative difference between sibling-classes is
important


If only relative difference matters, why must link-sharing service
curves be larger than real-time service curves? And smaller than
upper-limit service curves?

They don't. It just makes it simpler to assure that the service given to a class is at least the amount defined by the real-time curve, which is usually what you want.

I also seem to have a lot of difficulties in trying to simulate the
behaviour of the qdisc. I managed to build 'tcsim' with support for
HFSC, but I have been unable to get any reasonable results with it
with any qdisc at all. Even 'examples/sfq' which is very trivial seems
to produce almost random results when I change the 'perturb'
parameter, even though it should hardly make a difference, unless one
is unlucky enough to get two flows hashed in the same bucket, which is
extremely unlikely with only 3 flows. I have attempted to use a couple
other tools as well, and haven't been too successful.

Have you applied the patches from trash.net/~kaber/hfsc/tcsim ? With HZ=1000 and PSCHED_GETTIMEOFDAY as clocksource I got very good results.

Also, something as trivial as this:

tc qdisc add dev $DEV root handle 1: hfsc default 1
tc class add dev $DEV parent 1: classid 1:1 hfsc rt m2 100kbps

seems to work for 'eth0' but not for the 'lo' interface, whereas for
example the 'tbf' qdisc does work for 'lo' as well. If I run those
commands on 'lo', every packet shows up as dropped by the qdisc.

Works ok here... do you mean inside tcsim?


Regards
Patrick
_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
