But I find it awkward, for our purposes, the way rate and ceil seem to
work. If we theoretically have 200 clients, and all 200 are on, then
we can only set rate to 1/200 of the total ... in our case only about 10
kilobits per user. They CAN borrow up to ceil, yes. But since we are
REALLY busy, we can't really do much to give one client more than
another.
Hey, I just read the docs; I don't have any experience of using this. BUT there appears to be a priority option on the HTB qdisc (I'm not sure it's actually called "priority", and I don't have the docs in front of me) which seems to let you bias the squabble towards certain users and away from others. I vaguely recall that the excess might get divided as a ratio of the priorities?
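For what it's worth, the option in the HTB docs appears to be the `prio` parameter on a class: lower-numbered classes get first go at any spare bandwidth. A minimal sketch, with the interface, rates, and classids invented for illustration:

```shell
# Sketch only -- eth0 and all rates/classids are made-up examples.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:1 htb rate 2mbit

# Favoured user: guaranteed 100kbit, may borrow up to 2mbit, served first
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 100kbit ceil 2mbit prio 0

# Everyone else: same guarantee, but only borrows after prio 0 is satisfied
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 100kbit ceil 2mbit prio 1
```

Note that, as far as I can tell, classes at the *same* prio level split the excess in proportion to their rate (via quantum), not their priority, so the "ratio" part above may be a misremembering.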
However, if we KNOW only 50 are on right now, then we can give each of those a rate of 1/50, guaranteed. We can set ceil a little differently. And if we have bandwidth hogs, we can adjust things so it is fairer for all.
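As a concrete sketch of that 1/50 arithmetic (all numbers invented: a 2mbit link, 50 known-active users):

```shell
# Hypothetical setup: 2mbit uplink shared by 50 currently-active users.
# Each user is guaranteed 1/50 of the link (40kbit), may borrow up to 512kbit.
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 2mbit

# One such per-user leaf class (repeat with a different classid per user):
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 40kbit ceil 512kbit
```

The point being that rate (the guarantee) reflects who is actually on, while ceil caps how badly a hog can crowd out the rest.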
Suppose we have invited 200 guests for dessert. So we make 200 tarts to be sure everyone gets one. But only 50 guests show up. So they eat their own tart, and then fight over the other 150 tarts.
What if you could look at the total number of guests before they enter
the dining room, make a quick assessment, and put two or three tarts
on each plate instead of one. They still might squabble a bit over
the remaining tarts, but not as much of a free-for-all as before.
Hmm, perhaps I'm being stupid, but those two situations seem identical to me? I can't see the difference between starting with 1 "tart" each plus a 1/50 share of the remaining 150 tarts, and starting with a 1/50 share of all 200 tarts...?
Also, you appear to be more interested in prioritising based on the *type* of traffic, so that more logically belongs at the top of your tree. I think it's only this way that you can limit the P2P people; otherwise all you can say is that each P2P user can use their full allocation of bandwidth to download junk. However, if you put P2P at the top and limit it to 20% of total, then each user will be able to use *up to* their full quota, but if there are lots of P2P users, each may not even be able to reach that level, and so will find themselves throttled back below even their personal download quota. This appears to be what you are after?
This is a really interesting idea ... if we can keep our accounting
separate then it WOULD open up more possibilities. Maybe someone can
point me in the direction of IP traffic accounting that we could use
separate from the tc.
I thought that every iptables rule and chain already keeps packet/byte counters. There is also a dedicated accounting option in iptables, I seem to remember (check the man pages). Perhaps there are other options involving a user-space queue which doesn't actually queue, but counts stuff... I'm sure there are other options as well.
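The plain per-rule counters may be enough on their own; a sketch, with the chain name and customer addresses invented:

```shell
# Per-customer byte counting with ordinary iptables rule counters
# (no target needed: a match-only rule still counts what hits it).
iptables -N ACCT_OUT                 # hypothetical chain just for counting
iptables -I FORWARD -j ACCT_OUT
iptables -A ACCT_OUT -d 10.0.0.1     # one counting rule per customer IP
iptables -A ACCT_OUT -d 10.0.0.2

# Read the packet/byte counters back out; add -Z to zero them afterwards:
iptables -L ACCT_OUT -v -n -x
```

A script reading those counters periodically would give you the accounting, completely separate from tc.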
For example, you could set up the rules with the class of traffic at the top and the users underneath: e.g. the top level is "P2P", "bulk", "interactive", and under each is a user class (or SFQ, etc.). Now you could limit the P2P to be only 1/3 of total bandwidth, which is then fought over by the users; each gets a fraction depending on how many others are using it, not how many classes you have. Perhaps for the other classes even SFQ will be enough?
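A sketch of that tree (interface, rates, and classids all invented; a 3mbit link assumed for round numbers):

```shell
# Traffic *type* at the top, per-flow fairness underneath.
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1:  classid 1:1  htb rate 3mbit

# Top-level type classes: P2P hard-capped at 1/3 of the link (ceil = rate,
# so it cannot borrow); the others may borrow up to the full link.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit ceil 1mbit   # P2P
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 3mbit   # bulk
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 1mbit ceil 3mbit   # interactive

# Instead of one class per user, SFQ under each type shares that type's
# bandwidth roughly evenly between the flows inside it.
tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev eth0 parent 1:30 handle 30: sfq perturb 10
```

You would still need filters (tc filter / iptables MARK) to steer each packet into the right type class, which is its own classification problem.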
I think I can see that ... if we don't have to put each customer into
one (or two or three) different buckets. If we could even put groups
of customers into buckets together, that might be good. But again, it
seems like it would be handy to be able to dynamically change bucket
sizes and whose packets go through them.
I think this method will make that easier. Remember that each customer will now turn up multiple times, i.e. once for each type of traffic. You can more easily tune the buckets determining the amount of each sort of traffic, e.g. turning P2P up in the evening and down during the day, simply by adjusting the single top bucket rather than each individual one.
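With the type classes at the top, that day/night adjustment is a one-line `tc class change` that a cron job could run (classid 1:10 here is an assumed top-level P2P class; rates invented):

```shell
# Evening: let P2P have half of a hypothetical 3mbit link
tc class change dev eth0 parent 1:1 classid 1:10 htb rate 1500kbit ceil 1500kbit

# Daytime: squeeze it back down to 1/6
tc class change dev eth0 parent 1:1 classid 1:10 htb rate 500kbit ceil 500kbit
```

No per-user classes need to be touched; every P2P user is throttled or loosened together.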
Good luck
Ed W
_______________________________________________
LARTC mailing list / LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/mailman/listinfo/lartc
HOWTO: http://lartc.org/