Re: [PATCH 0/8] Suspend block api (version 8)

On Fri, 2010-05-28 at 15:20 +0200, Peter Zijlstra wrote:
> On Fri, 2010-05-28 at 14:02 +0100, Alan Cox wrote:
> > On Fri, 28 May 2010 14:30:36 +0200
> > Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > 
> > > On Fri, 2010-05-28 at 13:21 +0100, Alan Cox wrote:
> > > > [Total kernel changes
> > > > 
> > > >         Ability to mark/unmark a scheduler control group as outside of
> > > >         some parts of idle consideration. Generically useful and
> > > >         localised. Group latency will do most jobs fine (Zygo is correct
> > > >         it can't solve his backup case elegantly I think)
> > > > 
> > > >         Test in the idling logic to distinguish the case and only needed
> > > >         for a single Android specific power module. Generically useful
> > > >         and localised] 
> > > 
> > > I really don't like this..
> > > 
> > > Why can't we go with the previously suggested: make bad apps block on
> > > QoS resources or send SIGXCPU, SIGSTOP, SIGTERM and eventually SIGKILL
> > 
> > Ok. Are you happy with the QoS being attached to a scheduler control
> > group and the use of them to figure out what is what?
> 
> Up to a point, but explicitly not running runnable tasks complicates the
> task model significantly, and interacts with fun stuff like bandwidth
> inheritance and priority/deadline-inheritance-like things -- a subject
> you really don't want to complicate further.
> 
> We really want to do our utmost best to make applications block on
> something without altering our task model.
> 
> If applications keep running despite being told repeatedly to cease, I
> think the SIGKILL option is a sane one (they got SIGXCPU, SIGSTOP and
> SIGTERM before that and had ample opportunity to block on something).
> 
> Traditional cpu resource management treats the CPU as an ever
> replenished resource; breaking that assumption (not running runnable
> tasks) puts us on very shaky ground indeed.

Also, I'm not quite sure why we would need cgroups to pull this off.

It seems most of the problems the suspend-blockers are trying to solve
stem from not running runnable tasks. Not running runnable tasks can be
seen as assigning those tasks 0 bandwidth, which is a situation
extremely prone to all kinds of inversion. Such a situation would
require bandwidth inheritance to function at all, so possibly we can see
suspend-blockers as a misguided implementation of that.
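
To make the inversion concrete, here is a toy userspace sketch (my own
illustration, not kernel code and not anything from the patch set): one
thread takes a resource and then simply stops making progress, as if it
had been given 0 bandwidth, and everything that needs the resource is
dragged down with it unless something like bandwidth inheritance lets
the holder run again.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t res = PTHREAD_MUTEX_INITIALIZER;

static void *frozen_holder(void *arg)
{
        pthread_mutex_lock(&res);
        pause();                        /* "0 bandwidth": holds the resource, never runs again */
        pthread_mutex_unlock(&res);
        return NULL;
}

static void *important_task(void *arg)
{
        pthread_mutex_lock(&res);       /* blocks forever: inversion */
        puts("got the resource");
        pthread_mutex_unlock(&res);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, frozen_holder, NULL);
        sleep(1);                       /* let the holder take the lock first */
        pthread_create(&b, NULL, important_task, NULL);
        pthread_join(b, NULL);          /* never completes without inheritance */
        return 0;
}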

So let's look at the problem: we want to be frugal with power, which
means that the system as a whole should strive to do nothing. And we
want to enforce this as strictly as possible.

If we look at the windowing system, let's call it X: X will inform its
clients about the visibility of their windows, so any client that tries
to draw to its window after being told it is not visible is wasting
energy and should be punished.

(I really wish the actual X on my desktop would do more of that -- it's
utterly ridiculous that Firefox keeps animating banners and the like
when nobody can possibly see them.)

Clearly when we turn the screen off, nothing is visible and all clients
should cease to draw.
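
For the well-behaved case the plumbing already exists; a minimal Xlib
sketch (again my own illustration, not something from the patches) that
stops drawing once the server says the window is fully obscured:

#include <X11/Xlib.h>

int main(void)
{
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
                return 1;

        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 200, 200, 1, 0, 0);
        XSelectInput(dpy, win, VisibilityChangeMask | ExposureMask);
        XMapWindow(dpy, win);

        int drawing = 1;
        for (;;) {
                XEvent ev;

                XNextEvent(dpy, &ev);
                if (ev.type == VisibilityNotify)
                        /* the server just told us whether anyone can see us */
                        drawing = (ev.xvisibility.state != VisibilityFullyObscured);
                else if (ev.type == Expose && drawing)
                        XFillRectangle(dpy, win, DefaultGC(dpy, DefaultScreen(dpy)),
                                       10, 10, 50, 50);
        }
}

Whether blanking the screen actually generates a VisibilityNotify is a
separate question; the point is that a cooperative client has a hook to
stop drawing, and an uncooperative one is clearly identifiable.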

How do we want to punish disobedient clients? Is blocking them
sufficient? Do we want to maintain a shitlist of iffy clients?

Note that the 'buggy' client won't function properly if we block its
main event loop this way -- it won't respond to other events -- but as
argued, it's a buggy app, hence it's by definition unreliable and we
don't care.

Next comes the interesting problem of who gets to keep the screen lit; I
think in the above case that is a pure userspace problem and doesn't
need kernel intervention.

Can we apply the same reasoning to other resources, like filesystems and
the network? For both of them it seems the main governing body isn't the
windowing system but the kernel (although arguably you could do it fully
in middleware, just like X is).

But in both cases I think we can work with a QoS system where we assign
a service level to each task, plus a server level (on a scale running
inverse to the task levels), and only provide service when task-level >=
server-level. [Server-level 0 would serve everybody; a task at level 0
would only get service when everybody does.]
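
A sketch of what I mean, with made-up names (none of this exists
today):

#include <stdbool.h>

struct qos_task {
        int task_level;                 /* 0 = don't care, higher = more important */
};

static int qos_server_level;            /* 0 = serve everybody; raised as the
                                         * system gets more serious about idling */

static bool qos_may_serve(const struct qos_task *t)
{
        /* server-level 0 serves everybody; a task at level 0 only gets
         * service when everybody does */
        return t->task_level >= qos_server_level;
}

The interesting part is which services (block, net, ...) consult such a
check and where, not the check itself.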

If we allow userspace to set server levels, we need to ensure that
whoever is allowed to do so is a well-functioning program.

We can do a similar thing for wakeups: aside from setting wakeup slack,
we could also set a per-task wakeup limit, though I'm not quite sure how
that would work out -- it needs more thought. An application exceeding
its wakeup limit would still need to be woken (not doing so leads to fun
problems), but the event is clearly attributable and loggable.
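
The slack half already has a knob today; a minimal sketch using the
existing per-task timer slack (the wakeup limit itself is purely
hypothetical and has no interface):

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
        /* allow the kernel to defer this task's timers by up to 50ms so
         * its wakeups can be coalesced with other activity */
        if (prctl(PR_SET_TIMERSLACK, 50UL * 1000 * 1000, 0, 0, 0))
                perror("PR_SET_TIMERSLACK");

        /* a per-task wakeup *limit* as discussed above does not exist;
         * exceeding it would still have to result in a wakeup, just an
         * attributable and logged one */
        return 0;
}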


