Re: [RFC][PATCH] PM: Avoid losing wakeup events during suspend

--- On Sun, 6/20/10, David Brownell <david-b@xxxxxxxxxxx> wrote:

... in a sort of "aren't we asking the
wrong questions??" manner ...

I suspect that if we looked at the problem
in terms of how to coordinate subsystems
(an abstraction which is at best very
ad hoc today!) we would end up with a
cleaner model, one which doesn't bother
so many folks the way wakelocks or even
suspend blockers seem to bother them...


> From: David Brownell <david-b@xxxxxxxxxxx>
> Subject: Re:  [RFC][PATCH] PM: Avoid losing wakeup events during suspend
> To: markgross@xxxxxxxxxxx, "Alan Stern" <stern@xxxxxxxxxxxxxxxxxxx>
> Cc: "Neil Brown" <neilb@xxxxxxx>, linux-pm@xxxxxxxxxxxxxxxxxxxxxxxxxx, "Dmitry Torokhov" <dmitry.torokhov@xxxxxxxxx>, "Linux Kernel Mailing List" <linux-kernel@xxxxxxxxxxxxxxx>, "mark gross" <640e9920@xxxxxxxxx>
> Date: Sunday, June 20, 2010, 9:04 PM
> 
> > > > Indeed, the same problem arises if the event
> > > > isn't delivered to userspace until after
> > > > userspace is frozen.
> 
> Can we put this more directly:  the problem is
> that the *SYSTEM ISN'T FULLY SUSPENDED* when the
> hardware wake event triggers?  (Where "*SYSTEM*"
> includes userspace, not just the kernel.  In fact
> the overall system is built from many subsystems,
> some in the kernel and some in userspace.)
> 
> At the risk of being prematurely general:  I'd
> point out that these subsystems probably have
> sequencing requirements.  kernel-then-user is
> a degenerate case, and surely oversimplified.
> There are other examples, e.g. between kernel
> subsystems...  Like needing to suspend a PMIC
> before the bus it uses, where that bus uses
> a task to manage request/response protocols.
> (Think I2C or SPI.)
> 
> This is like the __init/__exit sequencing mess...
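The ordering constraint described above can be sketched as follows. This is purely illustrative (the names and interfaces are hypothetical, not a real kernel API): subsystems suspend in reverse registration order, the way the driver core walks its device list, so a PMIC registered after the bus it uses goes down before that bus does.

```python
# Illustrative sketch only: suspend registered subsystems in reverse
# registration order.  All names here are hypothetical.

class Subsystem:
    def __init__(self, name):
        self.name = name
        self.suspended = False

    def suspend(self):
        self.suspended = True

registered = []

def register(name):
    """Register a subsystem; later registrations suspend earlier."""
    s = Subsystem(name)
    registered.append(s)
    return s

def suspend_all(order_log):
    # Reverse order: the PMIC (registered after the bus it sits on)
    # must be suspended before that bus is.
    for s in reversed(registered):
        s.suspend()
        order_log.append(s.name)

spi = register("spi-bus")
pmic = register("pmic")

log = []
suspend_all(log)   # suspends pmic first, then spi-bus
```

Resume would walk the same list forward, giving the __init/__exit-style symmetry the text alludes to.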
> 
> In terms of userspace event delivery, I'd say
> it's a bug in the event mechanism if taking the
> next step in suspension drops any event.  It
> should be queued, not lost...  As a rule the
> hardware queuing works (transparently)...
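The "queued, not lost" rule might look like this in miniature. This is a sketch of the invariant, not any real kernel interface: freezing the consumer must only defer delivery, never drop an event.

```python
# Sketch, not a real interface: an event channel that queues rather
# than drops events while the reader side is frozen, so nothing is
# lost across the suspend transition.

from collections import deque

class EventChannel:
    def __init__(self):
        self.queue = deque()
        self.frozen = False
        self.delivered = []

    def post(self, event):
        # Always enqueue first; freezing must never cause a drop.
        self.queue.append(event)
        if not self.frozen:
            self._drain()

    def freeze(self):
        self.frozen = True

    def thaw(self):
        self.frozen = False
        self._drain()

    def _drain(self):
        while self.queue:
            self.delivered.append(self.queue.popleft())

ch = EventChannel()
ch.post("keypress")       # delivered immediately
ch.freeze()               # consumer frozen for suspend
ch.post("wakeup-button")  # queued, not lost
ch.thaw()                 # drained on resume
```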
> 
> > > > Of course, the underlying issue here is that
> > > > the kernel has no direct way to know when
> > > > userspace has finished processing an event.
> 
> 
> Again said more directly:  there's no current
> mechanism to coordinate subsystems.  Userspace
> can't communicate "I'm ready" to the kernel, and
> vice versa.  (A few decades ago, APM could do
> that ... we dropped such mechanisms, though, and
> I'm fairly sure APM's implementation was holey.)
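One way such an "I'm ready" handshake could work is a read/ack counter: the kernel counts wakeup events, userspace reads the count, finishes processing, and hands the count back; suspend proceeds only if no new event raced in between. Everything below is a hypothetical sketch, not an existing interface.

```python
# Hypothetical sketch of a kernel/userspace handshake.  The kernel
# side keeps a wakeup-event count; userspace reads it, handles its
# pending events, then offers the count back as an acknowledgment.
# Suspend is allowed only if the count is unchanged.

class WakeupCoordinator:
    def __init__(self):
        self.count = 0

    def report_event(self):
        # Kernel side: a wakeup event was seen.
        self.count += 1

    def read_count(self):
        # Userspace: "how many events have you seen so far?"
        return self.count

    def try_suspend(self, acked):
        # Userspace: "I've handled everything up to `acked`."
        if acked != self.count:
            return False  # a new event raced in; abort the suspend
        return True

coord = WakeupCoordinator()
coord.report_event()
seen = coord.read_count()           # userspace processes the event...
ok_first = coord.try_suspend(seen)  # no race: suspend may proceed

coord.report_event()                 # event arrives after the read
ok_second = coord.try_suspend(seen)  # stale ack: suspend aborted
```

The check-and-abort step closes exactly the window the thread is worried about: an event that fires after userspace last looked but before the system is fully down.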
> 
> 

_______________________________________________
linux-pm mailing list
linux-pm@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/linux-pm
