Re: [PATCH 01/13] PM: Add wake lock api.

On Fri, Feb 13, 2009 at 5:49 PM, Matthew Garrett <mjg59@xxxxxxxxxxxxx> wrote:
> On Fri, Feb 13, 2009 at 05:33:35PM -0800, Arve Hjønnevåg wrote:
>> On Fri, Feb 13, 2009 at 5:06 PM, Matthew Garrett <mjg59@xxxxxxxxxxxxx> wrote:
>> > On Fri, Feb 13, 2009 at 04:50:11PM -0800, Arve Hjønnevåg wrote:
>> Or when not trusting userspace. In the last userspace API, I also use
>> a timeout when a process dies while holding a wakelock.
>> >
>> > That's policy that's easy to implement in userspace.
>>
>> How? When the process dies its wakelock is destroyed. With the old
>> sysfs interface, this was not a problem since the wakelocks were not
>> cleaned up. If we allow userspace wakelocks to be persistent, then
>> there is no limit on how many wakelocks userspace can create (which
>> was one of the objections against the sysfs interface).
>
> Like I suggested before, just multiplex them through a single daemon in
> userspace. That lets you tie your policy into your platform specifics.
> You get to do things like keep the lock until whatever process restarts
> dead system components indicates that your input process is running
> again, for instance.

OK, so you want a single daemon in userspace (init?) to handle process
restarting and wakelocks.
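
Something like this, I assume (a rough sketch against the proposed
/sys/power/wake_lock interface; the socket path, the daemon name and
the one-connection-per-lock protocol are all made up):

/*
 * Userspace wakelock multiplexer sketch.  Each connected client counts
 * as one lock holder; a client that dies is detected by EOF on its
 * socket, so the daemon, not the kernel, decides when the shared
 * kernel wakelock is dropped.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <poll.h>

#define MAX_CLIENTS 64

static void sysfs_write(const char *path, const char *s)
{
	FILE *f = fopen(path, "w");
	if (f) {
		fputs(s, f);
		fclose(f);
	}
}

int main(void)
{
	struct sockaddr_un addr = { .sun_family = AF_UNIX,
				    .sun_path = "/dev/socket/wakelockd" };
	struct pollfd fds[MAX_CLIENTS + 1];
	int nclients = 0;

	unlink(addr.sun_path);
	fds[0].fd = socket(AF_UNIX, SOCK_STREAM, 0);
	fds[0].events = POLLIN;
	bind(fds[0].fd, (struct sockaddr *)&addr, sizeof(addr));
	listen(fds[0].fd, 4);

	for (;;) {
		poll(fds, nclients + 1, -1);

		if (fds[0].revents & POLLIN) {
			int c = accept(fds[0].fd, NULL, NULL);
			if (c < 0 || nclients >= MAX_CLIENTS) {
				if (c >= 0)
					close(c);
			} else {
				/* First holder takes the kernel lock. */
				if (nclients == 0)
					sysfs_write("/sys/power/wake_lock",
						    "wakelockd");
				nclients++;
				fds[nclients].fd = c;
				fds[nclients].events = POLLIN;
				fds[nclients].revents = 0;
			}
		}

		for (int i = 1; i <= nclients; i++) {
			char buf[16];
			if (!(fds[i].revents & (POLLIN | POLLHUP)))
				continue;
			/* EOF or error: the client died or released. */
			if (read(fds[i].fd, buf, sizeof(buf)) <= 0) {
				close(fds[i].fd);
				fds[i] = fds[nclients];
				nclients--;
				i--;	/* re-examine the moved entry */
				/* Last holder gone: drop the kernel lock. */
				if (nclients == 0)
					sysfs_write("/sys/power/wake_unlock",
						    "wakelockd");
			}
		}
	}
}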

>> >> That is a pretty big leap. There is no wakelock API in the current
>> >> kernel. I think the absence of an API is a more plausible explanation
>> >> for it not being used than that people do not like the API.
>> >
>> > If an acceptable API gets merged then there's no reason not to quickly
>> > spread it to the sections of the kernel which would benefit.
>>
>> It is not trivial to add wakelocks to every section of the kernel that
>> may need to run.
>
> Doing things right is often harder, yes :)
>
>> > I think it would be pretty acceptable to schedule a retry for the
>> > suspend if the count is still zero at that point.
>>
>> I don't. If a driver returns -EBUSY, the system will use 100% CPU until
>> either the driver stops returning -EBUSY or someone else locks a
>> wakelock.
>
> The retry doesn't have to be immediate. Yes, this is something of a
> hypocritical argument given everything else I've had to say about
> timeouts, but working out why a driver's preventing a suspend is
> probably a Hard Problem. I don't think this single case is enough to add
> it to the entire API.

It is not a single case. Having wakelocks with timeout support makes
it trivial to work around the problem.
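
A non-immediate retry would presumably look something like this (a
sketch; everything other than pm_suspend() is a made-up name):

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/suspend.h>
#include <linux/workqueue.h>

static void retry_suspend(struct work_struct *work);
static DECLARE_DELAYED_WORK(retry_suspend_work, retry_suspend);

static void retry_suspend(struct work_struct *work)
{
	/* pm_suspend() fails if any driver refused to suspend. */
	if (pm_suspend(PM_SUSPEND_MEM))
		schedule_delayed_work(&retry_suspend_work, HZ / 10);
}

That avoids the 100% CPU spin, but it still polls on every retry
interval for as long as the driver refuses, instead of reacting to the
last unlock.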

>
>> >> We always use the debug case. I don't think a list of device structs is
>> >> better than a list of wakelocks.
>> >
>> > You may. Others will not. There's benefit in simplifying the non-debug
>> > case.
>>
>> A counter is not significantly simpler than a list if you remove all
>> the debug and timeout support:
>> lock:
>>   list_add
>> unlock:
>>   list_del
>>   if list_empty
>>     schedule suspend
>
> Remember that for things like IDE we probably need to have locks in the
> fast path. An atomic_inc is a lot cheaper than a spinlock protected
> list_add.

The slow paths for spinlocks and atomic operations are about the same.
On SMP ARMv6 we have:

static inline void __raw_spin_lock(raw_spinlock_t *lock)
{
	unsigned long tmp;

	__asm__ __volatile__(
"1:	ldrex	%0, [%1]\n"
"	teq	%0, #0\n"
#ifdef CONFIG_CPU_32v6K
"	wfene\n"
#endif
"	strexeq	%0, %2, [%1]\n"
"	teqeq	%0, #0\n"
"	bne	1b"
	: "=&r" (tmp)
	: "r" (&lock->lock), "r" (1)
	: "cc");

	smp_mb();
}


and:

static inline int atomic_add_return(int i, atomic_t *v)
{
	unsigned long tmp;
	int result;

	__asm__ __volatile__("@ atomic_add_return\n"
"1:	ldrex	%0, [%2]\n"
"	add	%0, %0, %3\n"
"	strex	%1, %0, [%2]\n"
"	teq	%1, #0\n"
"	bne	1b"
	: "=&r" (result), "=&r" (tmp)
	: "r" (&v->counter), "Ir" (i)
	: "cc");

	return result;
}
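
Both are ldrex/strex retry loops, so the uncontended cost is about the
same. The counter version you are suggesting would just be (a sketch;
schedule_suspend() is the same made-up call as in the pseudocode above):

#include <asm/atomic.h>

static atomic_t wake_count = ATOMIC_INIT(0);

static inline void wake_lock_counted(void)
{
	atomic_inc(&wake_count);
}

static inline void wake_unlock_counted(void)
{
	/* atomic_dec_and_test() returns true when the count hits zero. */
	if (atomic_dec_and_test(&wake_count))
		schedule_suspend();	/* hypothetical, as above */
}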
>
>> > I think the name doesn't do a good job
>> > of indicating what's going on, and I think the timeout aspect is a
>> > misfeature.
>>
>> I understand your objection to using timeouts, but removing the use of
>> timeouts is not currently an option. If you accept that timeouts will
>> be used, supporting them in the wakelock code simplifies the drivers
>> and reduces overhead.
>
> Why is it not an option?

I think we have covered this.
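
For reference, the driver-side timeout usage being argued about looks
roughly like this with the API from this patch (exact signatures per
the posted patch; the button driver is a made-up example):

#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/wakelock.h>

static struct wake_lock button_wake_lock;

static irqreturn_t button_irq(int irq, void *dev_id)
{
	/*
	 * Keep the system awake for up to 5 seconds so userspace can
	 * read the event; the timeout covers a process that never does.
	 */
	wake_lock_timeout(&button_wake_lock, 5 * HZ);
	return IRQ_HANDLED;
}

static void button_setup(void)
{
	wake_lock_init(&button_wake_lock, WAKE_LOCK_SUSPEND, "button");
	/* request_irq() etc. elided */
}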

-- 
Arve Hjønnevåg
_______________________________________________
linux-pm mailing list
linux-pm@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/linux-pm

