On Wed, Apr 15, 2009 at 06:45:00PM -0700, Arve Hjønnevåg wrote:
> 2009/4/15 mark gross <mgross@xxxxxxxxxxxxxxx>:
> > I'm sure you are sick of this question, but couldn't you do this same
> > functionality (constraining entry to Suspend) using existing
> > infrastructure? You are clearly aggregating a block count and
> > triggering on a zero aggregate value to enter Suspend.
>
> The only reason I have a block count is that the feedback on this
> list was that it is too expensive to use a spin-lock every time we
> want to block or unblock suspend. The later patches that enable stats
> and timeout support use inactive and active lists instead.
>
> > I'm thinking you could do something like CPUIDLE to push the implicit
> > suspend call and use a new pmqos parameter to keep the system alive.
> > We'd have to look at the API to make sure it would work in interrupt
> > mode.
> >
> > The only thing I got from the earlier threads asking why not PMQOS was
> > some complaint about the strcmp the constraint aggregation does WRT
> > performance. I don't think it would be too hard to address the
> > perceived performance issue (even if it's yet to be proven as an
> > issue anywhere).
>
> There was a question about why we did not use pmqos instead of the
> idle wake locks. One reason was that we mainly needed to prevent a
> specific state so as not to lose interrupts, not because the latency
> was too high. The other reason was that the pm_qos interface uses
> strings instead of handles. Using strings to search a list for the
> item you want to update has an impact on performance that is not hard
> to measure, but it also requires the clients to create unique strings.

Really? How many times per second was your system changing the pmqos
parameter? If that turns out to be realistic, non-abusive use of the
API, then perhaps I need to migrate pmqos to include / use handles.
When pmqos was designed we assumed it wouldn't be hit with high
parameter change rates.
What could be a realistic upper bound on the change rate for the
parameters?

> > FWIW there is some talk of re-factoring some of pmqos to sit on top
> > of a constraint framework (to generalize the code a little and
> > support constraining things that are not really measurable in terms
> > of latencies or throughputs).
> >
> > That should help with replacing the idle-wake-locks.
> >
> > Anyway, a few comments below.
> >
> > On Tue, Apr 14, 2009 at 06:41:25PM -0700, Arve Hjønnevåg wrote:
> >> Adds /sys/power/request_state, a non-blocking interface that specifies
> >> which suspend state to enter when no suspend blockers are active. A
> >> special state, "on", stops the process by activating the "main" suspend
> >> blocker.
> >>
> >> Signed-off-by: Arve Hjønnevåg <arve@xxxxxxxxxxx>
> >> ---
> >>  Documentation/power/suspend-blockers.txt |   76 +++++++++
> >>  include/linux/suspend_block.h            |   61 +++++++
> >>  kernel/power/Kconfig                     |   10 ++
> >>  kernel/power/Makefile                    |    1 +
> >>  kernel/power/main.c                      |   62 +++++++
> >>  kernel/power/power.h                     |    6 +
> >>  kernel/power/suspend_block.c             |  257 ++++++++++++++++++++++++++++++
> >>  7 files changed, 473 insertions(+), 0 deletions(-)
> >>  create mode 100644 Documentation/power/suspend-blockers.txt
> >>  create mode 100755 include/linux/suspend_block.h
> >>  create mode 100644 kernel/power/suspend_block.c
> >>
> >> diff --git a/Documentation/power/suspend-blockers.txt b/Documentation/power/suspend-blockers.txt
> >> new file mode 100644
> >> index 0000000..743b870
> >> --- /dev/null
> >> +++ b/Documentation/power/suspend-blockers.txt
> >> @@ -0,0 +1,76 @@
> >> +Suspend blockers
> >> +================
> >> +
> >> +A suspend_blocker prevents the system from entering suspend.
> >
> > And triggers implicit entry into suspend when the last blocker is
> > released.
> >
> >> +
> >> +If the suspend operation has already started when calling suspend_block on a
> >> +suspend_blocker, it will abort the suspend operation as long as it has not
> >> +already reached the sysdev suspend stage. This means that calling
> >> +suspend_block from an interrupt handler or a freezeable thread always works,
> >> +but if you call suspend_block from a sysdev suspend handler you must also
> >> +return an error from that handler to abort suspend.
> >> +
> >> +Suspend blockers can be used to allow user-space to decide which keys should
> >> +wake the full system up and turn the screen on.
> >
> > Provided the user-mode process controlling the screen power is in the
> > calling path of the user-space input-event thread.
>
> What do you mean by this? Our user-space input-event thread does not
> turn the screen on, but translates the events and queues the
> translated event on another queue. As long as all queues are protected
> by a suspend blocker, there is no requirement about which thread turns
> the screen on.

I mean that to enable the promised partial system wake-up ("Suspend
blockers can be used to allow user-space to decide which keys should
wake the full system up and turn the screen on.") there is more to the
story: this API is about system-level PM, not the device PM needed to
keep the screen off while partially waking things up before suspending
again. Perhaps an explanation of how a suspend blocker can be used to
partially wake up a system without the screen coming on would help me.
Right now I'm not seeing how that can be implemented in the context of
this patch.

> > Basically it should be made clear that this is not a device PM
> > implementation. It's a global system suspend constraint system, where
> > the count of blockers is aggregated in a static variable,
> > suspend_block_count, within suspend_block.c, with implicit suspend
> > called when the block count gets to zero.
> >
> > i.e.
> > only useful in platform-specific code for platforms that can do
> > ~10ms suspends and resumes.
>
> I disagree. If I have a desktop system that takes 5 seconds to wake up
> from sleep, I still want the ability to auto-suspend after user
> inactivity, and to prevent suspend while some tasks are running.

I guess we have this now, but the user-mode implementations don't scale
to low-latency and frequent-suspend applications.

--mgross

> >> +Use set_irq_wake or a platform-specific api to make sure the keypad
> >> +interrupt wakes up the cpu. Once the keypad driver has resumed, the
> >> +sequence of events can look like this:
> >> +- The keypad driver gets an interrupt. It then calls suspend_block on the
> >> +  keypad-scan suspend_blocker and starts scanning the keypad matrix.
> >
> > count = 1
> >
> >> +- The keypad-scan code detects a key change and reports it to the input-event
> >> +  driver.
> >> +- The input-event driver sees the key change, enqueues an event, and calls
> >> +  suspend_block on the input-event-queue suspend_blocker.
> >
> > count = 2
> >
> >> +- The keypad-scan code detects that no keys are held and calls suspend_unblock
> >> +  on the keypad-scan suspend_blocker.
> >
> > count = 1
> >
> >> +- The user-space input-event thread returns from select/poll, calls
> >> +  suspend_block on the process-input-events suspend_blocker and then calls
> >> +  read on the input-event device.
> >
> > count = 2
> >
> >> +- The input-event driver dequeues the key-event and, since the queue is now
> >> +  empty, it calls suspend_unblock on the input-event-queue suspend_blocker.
> >
> > count = 1
> >
> >> +- The user-space input-event thread returns from read. It determines that the
> >> +  key should not wake up the full system, calls suspend_unblock on the
> >> +  process-input-events suspend_blocker and calls select or poll.
> >
> > count = 0, plus a race between dropping into the Suspend state and
> > the execution of the select or poll.
> > (should probably be ok)
>
> This is not a race condition; select or poll does not affect the
> suspend blocker in the input driver, so it does not matter if the poll
> is called before or after suspend.
>
> >> +
> >> +                 Key pressed        Key released
> >> +                     |                  |
> >> +keypad-scan          ++++++++++++++++++
> >> +input-event-queue        +++        +++
> >> +process-input-events         +++        +++
> >> +
> >> +struct suspend_blocker {
> >> +#ifdef CONFIG_SUSPEND_BLOCK
> >> +	atomic_t flags;
> >> +	const char *name;
> >> +#endif
> >> +};
> >> +
> ...
> >> +
> >> +#define SB_INITIALIZED (1U << 8)
> >> +#define SB_ACTIVE (1U << 9)
> >> +
> >> +static DEFINE_SPINLOCK(state_lock);
> >> +static atomic_t suspend_block_count;
> >
> > So if suspend_block_count is the linchpin of the design, why have all
> > the suspend_blocker structs? Why have all the logic around the
> > different suspend blocker instances?
>
> The suspend_block structs need to be part of the api to allow stats
> and/or debugging. It also ensures that a single suspend_blocker does
> not change the global count by more than one. Also,
> suspend_block_count is not part of the design; it is an optimization
> that becomes useless when you enable stats.
>
> > It seems like a lot of extra effort for an atomic "stay awake"
> > counter.
>
> --
> Arve Hjønnevåg

_______________________________________________
linux-pm mailing list
linux-pm@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/linux-pm