Re: [PATCH 01/13] PM: Add wake lock api.

On Fri, Feb 13, 2009 at 5:06 PM, Matthew Garrett <mjg59@xxxxxxxxxxxxx> wrote:
> On Fri, Feb 13, 2009 at 04:50:11PM -0800, Arve Hjønnevåg wrote:
>> On Fri, Feb 13, 2009 at 4:05 PM, Matthew Garrett <mjg59@xxxxxxxxxxxxx> wrote:
>> > The only reason you've given for needing a timeout is that there are
>> > sections of the kernel that don't support wakelocks.
>>
>> Or when userspace is not trusted. In the latest userspace api, I also
>> use a timeout when a process dies while holding a wakelock.
>
> That's policy that's easy to implement in userspace.

How? When the process dies, its wakelock is destroyed. With the old
sysfs interface, this was not a problem since the wakelocks were not
cleaned up. If we allow userspace wakelocks to be persistent, then
there is no limit on how many wakelocks userspace can create (which
was one of the objections against the sysfs interface).
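
For illustration, the kernel side of that behavior might look roughly
like this (a sketch only; user_wake_lock_release() and the 5-second
grace period are made-up names and values, not from the patch):

static int user_wake_lock_release(struct inode *inode, struct file *file)
{
        struct wake_lock *lock = file->private_data;

        /* The owning process died.  Instead of dropping the lock (and
         * letting the system suspend immediately) or leaking it
         * forever, convert it into a timed lock and let the wakelock
         * core clean it up when the timeout expires. */
        wake_lock_timeout(lock, 5 * HZ);        /* illustrative value */
        return 0;
}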

>
>> > The only reason
>> > there are sections of the kernel that don't support wakelocks is that
>> > people don't like the API.
>>
>> That is a pretty big leap. There is no wakelock api in the current
>> kernel. I think the absence of an api is a more plausible explanation
>> for it not being used than people disliking the api.
>
> If an acceptable API gets merged then there's no reason not to quickly
> spread it to the sections of the kernel which would benefit.

It is not trivial to add wakelocks to every section of the kernel that
may need to run while the system is trying to suspend.

>> > Well, yeah, that's another pretty solid argument in favor of killing the
>> > freezer...
>>
>> Not freezing threads does not solve the problem. The thread could be
>> waiting on a driver that gets suspended before the driver that is
>> preventing suspend.
>
> I think it would be pretty acceptable to schedule a retry for the
> suspend if the count is still zero at that point.

I don't. If a driver returns -EBUSY, the system will use 100% CPU until
either the driver stops returning -EBUSY or someone else locks a
wakelock.
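
To make the failure mode concrete, the retry loop would degenerate
into something like this (pseudo-C; someone_locked_a_wakelock() is a
stand-in, not a real interface):

        for (;;) {
                error = pm_suspend(PM_SUSPEND_MEM);
                if (error != -EBUSY)
                        break;
                if (someone_locked_a_wakelock())
                        break;
                /* Nothing to wait on, so the only option is to retry
                 * immediately -- a busy loop at 100% CPU. */
        }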

>
>> > Mm? I was thinking that in the debug case you'd replace the counter with
>> > a list containing the device structs, then simply dump the device name
>> > to your debug output.
>>
>> We always use the debug case. I don't think a list of device structs
>> is better than a list of wakelocks.
>
> You may. Others will not. There's benefit in simplifying the non-debug
> case.

A counter is not significantly simpler than a list if you remove all
the debug and timeout support:
lock:
  list_add
unlock:
  list_del
  if list_empty
    schedule suspend
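
Fleshed out with locking, that is still only something like the
following (a sketch; it assumes a struct wake_lock with a list_head
member called link, and the names active_wake_locks, list_lock and
suspend_work are illustrative):

#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

static LIST_HEAD(active_wake_locks);
static DEFINE_SPINLOCK(list_lock);

static void try_to_suspend(struct work_struct *work)
{
        /* Re-check the list under the lock, then call pm_suspend(). */
}
static DECLARE_WORK(suspend_work, try_to_suspend);

void wake_lock(struct wake_lock *lock)
{
        unsigned long flags;

        spin_lock_irqsave(&list_lock, flags);
        list_add(&lock->link, &active_wake_locks);
        spin_unlock_irqrestore(&list_lock, flags);
}

void wake_unlock(struct wake_lock *lock)
{
        unsigned long flags;

        spin_lock_irqsave(&list_lock, flags);
        list_del(&lock->link);
        if (list_empty(&active_wake_locks))
                schedule_work(&suspend_work);
        spin_unlock_irqrestore(&list_lock, flags);
}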

>> > If you're passing a device struct then I think it implies that that
>> > device is no longer inhibiting suspend.
>>
>> How is passing the device struct to the api better than passing a
>> wake_lock struct?
>
> It's not hugely better, but it seems like a neater description of what's
> going on. I don't really object to the use of a struct wake_lock - I think
> it's more complicated than necessary for the vast majority of cases, but
> that's not a significant issue.

You think the api is more complicated than necessary, or the
implementation? A lot of the complexity in the wakelock implementation
removes complexity from the drivers.
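
For example, with the wakelock api a driver that must hold off suspend
while it processes an event only has to bracket the work (a sketch;
the names are illustrative, not from the patch):

static struct wake_lock rx_wake_lock;   /* wake_lock_init()ed at probe */

static void rx_work_func(struct work_struct *work)
{
        /* ... hand the event off to whoever consumes it ... */
        wake_unlock(&rx_wake_lock);     /* allow suspend again */
}
static DECLARE_WORK(rx_work, rx_work_func);

static irqreturn_t demo_irq(int irq, void *dev_id)
{
        wake_lock(&rx_wake_lock);       /* block suspend */
        schedule_work(&rx_work);
        return IRQ_HANDLED;
}

The reference counting, the retriggering of suspend, and the races
against a suspend already in progress all live in the wakelock code
instead of being duplicated in every driver.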

> I think the name doesn't do a good job
> of indicating what's going on, and I think the timeout aspect is a
> misfeature.

I understand your objection to using timeouts, but removing the use of
timeouts is not currently an option. If you accept that timeouts will
be used, supporting them in the wakelock code simplifies the drivers
and reduces overhead.
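
For example, a driver that only needs the system to stay awake long
enough for userspace to consume an event can take a timed lock and
never call wake_unlock() at all (a sketch; the 2-second window is
illustrative):

static struct wake_lock keypad_wake_lock;

static irqreturn_t keypad_irq(int irq, void *dev_id)
{
        /* Hold off suspend long enough for userspace to read the key. */
        wake_lock_timeout(&keypad_wake_lock, 2 * HZ);
        return IRQ_HANDLED;
}

Without timeout support in the wakelock code, every driver like this
would have to run its own timer just to call wake_unlock() later.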

-- 
Arve Hjønnevåg