On Tue, 2010-12-14 at 16:56 +0530, Srivatsa Vaddagiri wrote:
> On Tue, Dec 14, 2010 at 12:03:58PM +0100, Mike Galbraith wrote:
> > On Tue, 2010-12-14 at 15:54 +0530, Srivatsa Vaddagiri wrote:
> > > On Tue, Dec 14, 2010 at 07:08:16AM +0100, Mike Galbraith wrote:
> > > >
> > > > That part looks ok, except for the yield cross cpu bit. Trying to yield
> > > > a resource you don't have doesn't make much sense to me.
> > >
> > > So another (crazy) idea is to move the "yieldee" task on another cpu over to
> > > the yielding task's cpu, let it run till the end of the yielding task's slice,
> > > and then let it go back to the original cpu at the same vruntime position!
> >
> > Yeah, pulling the intended recipient makes fine sense. If he doesn't
> > preempt you, you can try to swap vruntimes or whatever makes arithmetic
> > sense and will help. Dunno how you tell him how long he can keep the
> > cpu though,
>
> Can't we adjust the new task's [prev_]sum_exec_runtime a bit so that it is
> preempted at the end of the yielding task's timeslice?

And dork up accounting. Why? Besides, it won't work, because you have no
idea who may preempt whom, when, or for how long. (Why do people keep
talking about timeslice? The only thing that exists is lag, and lag changes
the instant anyone does anything of interest.)

> > and him somehow going back home needs to be a plain old
> > migration, no fancy restoration of ancient history vruntime.
>
> What is the issue if it gets queued at the old vruntime (assuming the fair
> stick is still behind that)? Without that it will hurt fairness for the
> yieldee (and perhaps for the overall VM in this case).

Whom are you placing this task in front of or behind, based upon a
non-existent relationship? Your recipient may well have been preempted, and
is now further behind than the stored vruntime (completely irrelevant to the
current situation) would indicate, so why would you want to move it
rightward? Certainly not in the interest of fairness.
-Mike