Patrick McHardy wrote:

> The combinations you list are correct. Real-time curves are only
> valid for leaf-classes, whereas link-sharing and upper-limit curves
> are valid for all classes in the hierarchy.

Right, after a bit of experimentation and thinking, I realized this.

> When multiple curves are used, the following must hold:
>
>     rt <= ls <= ul

If this is to hold for all t, in practice it means:

    if d > 0:
        m1(rt) <= m1(ls) <= m1(ul)
        m2(rt) <= m2(ls) <= m2(ul)
        if m1 < m2:
            d(rt) >= d(ls) >= d(ul)
        elsif m1 > m2:
            d(rt) <= d(ls) <= d(ul)
        else:
            d is irrelevant
    else:
        m1 is irrelevant
        m2(rt) <= m2(ls) <= m2(ul)

Am I correct? What happens if these constraints are violated? Are any errors signalled?

Also, I have very little clue why these must hold as such. Obviously, if a link-sharing curve is smaller than the real-time curve, then when the class participates in link-sharing it would have to have sent less than it already has. But if this is so, does the algorithm break down totally, or does it only mean that the class does not participate in link-sharing until its share of excess bandwidth, based on the relative link-sharing service curve, rises above the real-time service curve? The latter would not necessarily be unwanted behaviour.

Then, if the upper-limit service curve is smaller than the link-sharing curve, what would that cause? Naively I would assume it merely means that the class participates in link-sharing according to its relative service curve, but never ends up taking more than what the upper-limit service curve dictates. E.g. with a relatively large link-sharing service curve and a smaller upper-limit service curve, the class would get a big share of a small amount of excess bandwidth, but as the bandwidth to share increases, the upper-limit service curve would cap it at a constant limit. Or, again, does the algorithm break somehow?

And I am even more confused when I think about what this means for interior classes and their service curves. Apparently the service curves of parent classes are respected, but does that mean the service curve of a parent class has to be equal to or larger than the sum of the service curves of all its children? If so, how does one calculate this? Calculated exactly, the sum of n two-piece curves is a piecewise-linear curve with up to n+1 segments. Or, if not, what happens when the service curves of the child classes exceed the service curve of the parent class? Obviously we are only talking about link-sharing service curves (and upper-limit service curves) here, as real-time service curves are always fulfilled and are handled only at leaf classes.

> To understand why there are two (forgetting about upper-limit curves
> for now) different curves, you need to know that scheduling in HFSC
> is based on two criteria: the real-time criterion, which ensures that
> the guarantees of leaf-classes are met, and the link-sharing
> criterion, which tries to satisfy the service curves of intermediate
> classes and distributes excess bandwidth fairly. The reason why
> there are two different criteria is that in the Fair Service
> Link-sharing model that is approximated by HFSC it is not always
> possible to guarantee the service of all classes simultaneously at
> all times (with non-linear service curves). HFSC chooses to
> guarantee the service curves of (real-time) leaf-classes (because
> only leaves carry packets), and uses the link-sharing criterion to
> minimize the discrepancy between the actual service received and the
> service defined by the Fair Service Link-sharing Model.

Right.
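Going back to the curve ordering for a moment: to make sure I have it straight, this is the kind of configuration I have been experimenting with - a minimal sketch with rates and device invented by me for illustration, where rt <= ls <= ul should hold at every t (with equal d values, ordering m1 and m2 is enough):

    tc qdisc add dev eth0 root handle 1: hfsc default 10
    # parent class: link-sharing and upper-limit only, since real-time
    # curves are valid only on leaves
    tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 1000kbit ul m2 1000kbit
    # leaf: rt(t) <= ls(t) everywhere because both slopes are ordered and
    # the d values are equal; the linear ul lies above ls everywhere
    tc class add dev eth0 parent 1:1 classid 1:10 hfsc \
        rt m1 200kbit d 50ms m2 100kbit \
        ls m1 400kbit d 50ms m2 200kbit \
        ul m2 800kbit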
I think I understand the two criteria themselves, but I am not so certain of the implications for actual use.

> The upper-limit curve is used to limit the link-sharing
> curve. Without an upper-limit curve, packets are dequeued at the
> speed the underlying device is capable of. For example in the case
> of software devices, this is not very desirable, so you can limit
> the total output rate.

I came to this conclusion by experimentation. So an upper-limit service curve can be used to shape link-sharing usage - but real-time service curves are fulfilled regardless of it? The end result would then be that a class with only a real-time service curve throttles itself to the configured rate, with a link-sharing service curve it becomes work-conserving, and with an upper-limit service curve it throttles itself again (I attempt to sketch all three cases at the end of this mail).

> For your other questions:
>
> - If you specify only a real-time service curve the class will not
>   participate in link-sharing. This means it can only send at its
>   configured rate. The difference to a link-share+upper-limit curve
>   is that the service is guaranteed.
>
> - If you specify only a link-share curve there are no deadlines and
>   no guarantees can be given.

Right, after a while I realized this. But again I am somewhat uncertain of the ramifications. If we take classes whose real-time service curves equal their link-sharing service curves, and compare them to classes that have only link-sharing service curves, in an environment where there is excess bandwidth to share, how will the results differ? If there is a difference, is it because the link-sharing algorithm might choose not to fulfill the service curve of a class immediately in the interest of exact fairness, whereas the real-time constraint would have forced it to abandon fairness for a moment in order to fulfill the service curve? I am really bad at explaining what I mean by these things, which just shows my uncertainty about the subject.

> Some other notes you might find helpful:
>
> - The sum of all real-time curves must not exceed 100%.

100% of what? The actual link capacity? And does this have to hold at every time t of the service curves? How do I confirm that a bunch of service curves with different m1, d and m2 values never sum to more than 100% anywhere? And what happens if they do? Obviously there is no bandwidth left to share, but does the algorithm still try to fulfill the requirements as well as it can, picking the real-time class that is furthest behind schedule? Or will some class(es) end up dominating the link? Or will the whole algorithm break down because some variable goes negative or something?

> - The actual bandwidth assigned to link-sharing classes doesn't
>   matter, only the relative difference between sibling-classes is
>   important.

If only the relative difference matters, why must link-sharing service curves be larger than real-time service curves? And smaller than upper-limit service curves? (I also sketch what I understand by "only the ratio matters" below.)

> Great! I started some documentation last year, but never got to
> finishing it. I can send it to you in private, but I don't want
> to publish it as long as it's unfinished.

Large parts of the HFSC algorithm still seem unclear to me, especially all the edge cases - the common case seems rather straightforward, and I have even gotten it to work, at times.
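Here is my attempt at sketching the three behaviours mentioned above; the device and all rates are invented by me, so take this as a guess at the semantics rather than a known-good recipe:

    tc qdisc add dev eth0 root handle 1: hfsc default 30
    tc class add dev eth0 parent 1: classid 1:1 hfsc ls m2 1000kbit ul m2 1000kbit

    # real-time only: guaranteed 300kbit, but no link-sharing, so it
    # should never send faster than that
    tc class add dev eth0 parent 1:1 classid 1:10 hfsc rt m2 300kbit

    # real-time plus link-share: guaranteed 300kbit, work-conserving
    # beyond it
    tc class add dev eth0 parent 1:1 classid 1:20 hfsc rt m2 300kbit ls m2 300kbit

    # link-share capped by upper-limit: no guarantee, shares excess
    # bandwidth, but should top out at 600kbit
    tc class add dev eth0 parent 1:1 classid 1:30 hfsc ls m2 300kbit ul m2 600kbit

Note that the two real-time curves sum to 600kbit, staying below the (assumed) 1000kbit link rate, which is what I take the 100% rule to refer to.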
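And if only the ratio between siblings matters, then as far as I can tell these two pairs of leaf classes should distribute excess bandwidth identically, 1:2, despite the wildly different absolute numbers (again, rates invented):

    tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 10kbit
    tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 20kbit

    # ...should share excess bandwidth exactly like:
    tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 300kbit
    tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 600kbit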
I also seem to have a lot of difficulties trying to simulate the behaviour of the qdisc. I managed to build 'tcsim' with support for HFSC, but I have been unable to get any reasonable results from it with any qdisc at all. Even 'examples/sfq', which is very trivial, seems to produce almost random results when I change the 'perturb' parameter, even though it should hardly make a difference, unless one is unlucky enough to get two flows hashed into the same bucket - extremely unlikely with only 3 flows. I have attempted to use a couple of other tools as well, without much success.

Also, something as trivial as this:

    tc qdisc add dev $DEV root handle 1: hfsc default 1
    tc class add dev $DEV parent 1: classid 1:1 hfsc rt m2 100kbps

seems to work for 'eth0' but not for the 'lo' interface, whereas for example the 'tbf' qdisc does work for 'lo' as well. If I run those commands on 'lo', every packet shows up as dropped by the qdisc.

In any case, I am not going to write anything about HFSC before I can get even the basics cleared up for myself :-)

Thanks for sorting this thing out.

--
Naked