On 12/19/2010 12:05 PM, Mike Galbraith wrote:
> On Sun, 2010-12-19 at 08:21 +0200, Avi Kivity wrote:
> > On 12/18/2010 09:06 PM, Mike Galbraith wrote:
> > > Hm, so it needs to be very cheap, and highly repeatable.
> > >
> > > What if: so you're trying to get spinners out of the way right?  You
> > > somehow know they're spinning, so instead of trying to boost some
> > > task, can you do a directed yield in terms of directing a spinner
> > > that you have the right to diddle to yield.  Drop his lag, and
> > > resched him.  He's not accomplishing anything anyway.
> >
> > There are a couple of problems with this approach:
> >
> > - current yield() is a no-op
>
> That's why you'd drop lag, set to max(se->vruntime, cfs_rq->min_vruntime).
Internal scheduler terminology again, don't follow.
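If I have to guess at it: something like the sketch below, taking "lag" to
mean whatever entitlement the spinner has built up relative to the
runqueue's min_vruntime.  Hypothetical code only, with helper names
borrowed from kernel/sched_fair.c, not a proposed patch:

static void drop_lag_and_resched(struct task_struct *p)
{
	struct sched_entity *se = &p->se;
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

	/*
	 * Forget any CPU time the spinner is still "owed": pull its
	 * vruntime up to the runqueue minimum so it no longer looks
	 * more deserving than anybody else.
	 */
	se->vruntime = max_vruntime(se->vruntime, cfs_rq->min_vruntime);

	/* p is spinning on its CPU right now; get it off. */
	resched_task(p);
}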
> > - even if it weren't, the process (containing the spinner and the
> > lock-holder) would yield as a whole.
>
> I don't get this part.  How does the whole process yield if one thread
> yields?
The process is the sum of its threads.  If a thread loses 1 msec of
runtime due to a yield, the process loses that 1 msec as well.  If the
lock is only held for, say, 100 usec, it would be better for the process
to spin rather than yield.
With directed yield the process loses nothing by yielding to one of its threads.
> > If it yielded for exactly the time needed (until the lock holder
> > releases the lock), it wouldn't matter, since the spinner isn't
> > accomplishing anything, but we don't know what the exact time is.  So
> > we want to preserve our entitlement.
>
> And that's the hard part.  If you drop lag, you may hurt yourself, but
> at least only yourself.
We already have a "hurt only yourself" thing. We sleep for 100 usec when we detect spinning. It's awful.
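For reference, the awful thing looks roughly like this (kvm_vcpu_on_spin()
in virt/kvm/kvm_main.c, reproduced from memory, so details may be off):

void kvm_vcpu_on_spin(struct kvm_vcpu *vcpu)
{
	ktime_t expires;
	DEFINE_WAIT(wait);

	prepare_to_wait(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);

	/* Sleep for 100 us and hope the lock holder gets scheduled. */
	expires = ktime_add_ns(ktime_get(), 100000UL);
	schedule_hrtimeout(&expires, HRTIMER_MODE_ABS);

	finish_wait(&vcpu->wq, &wait);
}

We have no idea whether 100 usec is too long or too short, and the freed
time goes to whoever the scheduler likes, not to the lock holder.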
> > With a pure yield implementation the process would get less than its
> > fair share, even discounting spin time, which we'd be happy to donate
> > to the rest of the system.
We aren't happy to donate it to the rest of the system, since it will cause a guest with lots of internal contention to make very little forward progress.
> > > If the only thing running is virtualization, and nobody else can
> > > use the interface being invented, all is fair, but this passing of
> > > vruntime around is problematic when innocent bystanders may want to
> > > play too.
> >
> > We definitely want to maintain fairness.  Both with a dedicated virt
> > host and with a mixed workload.
>
> That makes it difficult to the point of impossible.  You want a specific
> task to run NOW for good reasons, but any number of tasks may want the
> same godlike power for equally good reasons.
I don't want it to run now. I want it to run before some other task. I don't care if N other tasks run before both. So no godlike powers needed, simply a courteous "after you".
> You could create a force select which only godly tasks could use that
> didn't try to play games with vruntimes, just let the bugger run, and
> let him also eat the latency hit he'll pay for that extra bit of cpu
> IFF you didn't care about being able to mix loads.  Or, you could just
> bump his nice level with an automated return to previous level on
> resched.  Any intervention has unavoidable consequences for all comers
> though.
Since task A is running now, clearly the scheduler thinks it deserves to run. What I want to do is take just enough of the "deserves" part to make it not run any more, and move it to task B.
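As a sketch (hypothetical, not a patch; it assumes A and B are plain CFS
tasks on the same runqueue so their vruntimes are directly comparable,
and it ignores the requeueing a real implementation would need):

static void donate_run_order(struct sched_entity *a, struct sched_entity *b)
{
	/*
	 * A is running now, so it has the "deserves to run" edge.  Give
	 * that edge to B by swapping their positions in vruntime order;
	 * the sum of vruntimes is unchanged, so no entitlement appears
	 * from, or disappears into, the void.
	 */
	if (b->vruntime > a->vruntime)
		swap(a->vruntime, b->vruntime);

	/*
	 * A real implementation would have to dequeue/requeue b around
	 * the change, and handle a and b sitting on different runqueues.
	 */
	resched_task(task_of(a));
}

Whether it's done by swapping vruntimes or by some other mechanism
doesn't matter much to me; the requirement is only that B runs before A
and that nobody outside the process gains or loses anything.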
> > > Yep, so much for accounting.
> >
> > What's the problem exactly?  What's the difference, system-wide, with
> > the donor continuing to run for that same entitlement?  Other tasks
> > see the same thing.
>
> SOME tasks receive gifts from the void.  The difference is the bias.
Isn't fork() a gift from the void?
> > > > > Where did the entitlement come from if task A running alone on
> > > > > cpu A tosses some entitlement over the fence to his pal task B
> > > > > on cpu B.. and keeps on trucking on cpu A?  Where does that
> > > > > leave task C, B's competition?
> > > >
> > > > Eventually C would replace A, since its share will be exhausted.
> > > > If C is pinned... good question.  How does fairness work with
> > > > pinned tasks?
> > >
> > > In the case I described, C had its pocket picked by A.
> >
> > Would that happen if global fairness was maintained?
>
> What's that? :)
If you run three tasks on a two cpu box, each gets 2/3 of a cpu.
> No task may run until there are enough of you to fill the box?
Why is that a consequence of global fairness?  Three tasks on a 4-cpu
box each get 100% of a cpu, and the fourth cpu idles.  Is that not fair
for some reason?
> God help you when somebody else wakes up Mr. Early-bird? ...
What?
> > I guess random perturbations cause task migrations periodically and
> > things balance out.  But it seems weird to have this devotion to
> > fairness on a single cpu and completely ignore fairness on a macro
> > level.
>
> It doesn't ignore it completely, it just doesn't try to do all the math
> continuously (danger Will Robinson: Peter has scary patches).  Prodding
> it in the right general direction with migrations is cheaper.
Doesn't seem to work from my brief experiment.

-- 
error compiling committee.c: too many arguments to function