On Friday 28 August 2009, Alan Stern wrote:
> On Wed, 26 Aug 2009, Rafael J. Wysocki wrote:
>
> > From: Rafael J. Wysocki <rjw@xxxxxxx>
> >
> > Theoretically, the total time of system sleep transitions (suspend
> > to RAM, hibernation) can be reduced by running the suspend and resume
> > callbacks of device drivers in parallel with each other.  However,
> > there are dependencies between devices such that, for example, we may
> > not be allowed to put one device into a low power state before
> > another one has been suspended (e.g. we cannot suspend a bridge
> > before suspending all devices behind it).  In particular, we're not
> > allowed to suspend the parent of a device before suspending the
> > device itself.  Analogously, we're not allowed to resume a device
> > before resuming its parent.
>
> > In this version of the patch the async threads started to execute
> > the resume callbacks of specific devices don't exit immediately having
> > done that, but search dpm_list for devices whose PM dependencies have
> > already been satisfied and execute their callbacks without waiting.
>
> Given this design, why bother to invoke device_resume() for the async
> devices?  Why not just start up a bunch of async threads, each of which
> calls async_resume() repeatedly until everything is finished?  (And
> rearrange async_resume() to scan the list first and do the actual
> resume second.)
>
> The same goes for the noirq versions.

I thought about that, but there are a few things to figure out:
- how many threads to start
- when to start them
- the stop condition

I had a few ideas, but then I thought it would be simpler to start an
async thread when we know there's some async work to do (i.e. there's an
async device to handle) and make each async thread browse the list just
once (the rationale is that we've just handled a device, so there's a
chance there are some devices with satisfied dependencies further down
the list).

Thanks,
Rafael
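
[For illustration only, a rough userspace sketch (pthreads, not the actual
kernel patch) of the approach described above: the main loop walks the
device list in order, starts a worker thread when it reaches an async
device, and each worker resumes its device and then makes a single forward
pass over the list, picking up any later async devices whose dependencies
have already been satisfied.  The device table, dev_resume() and the single
coarse mutex here are stand-ins for dpm_list, device_resume() and the
kernel's per-device synchronization.]

	/* Sketch of "one worker per async device, one pass over the list". */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define NDEV 6

	struct dev {
		const char *name;
		int parent;		/* index into devs[], or -1 for none */
		bool async;		/* may be resumed by an async worker */
		bool resumed;
	};

	/* Example list in dependency order; sync devices only depend on
	 * earlier sync devices here, so the main loop never has to wait. */
	static struct dev devs[NDEV] = {
		{ "root",    -1, false, false },
		{ "bridge0",  0, false, false },
		{ "nic0",     1, true,  false },
		{ "nic1",     1, true,  false },
		{ "bridge1",  0, false, false },
		{ "disk0",    4, true,  false },
	};

	static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

	static bool deps_satisfied(int i)
	{
		return devs[i].parent < 0 || devs[devs[i].parent].resumed;
	}

	static void dev_resume(int i)
	{
		printf("resuming %s\n", devs[i].name);
		devs[i].resumed = true;
	}

	/* Async worker: resume one device, then browse the list once. */
	static void *async_resume(void *arg)
	{
		int first = (int)(intptr_t)arg;
		int i;

		pthread_mutex_lock(&list_lock);
		if (!devs[first].resumed && deps_satisfied(first))
			dev_resume(first);
		/* One pass: pick up later async devices that became ready. */
		for (i = first + 1; i < NDEV; i++)
			if (devs[i].async && !devs[i].resumed && deps_satisfied(i))
				dev_resume(i);
		pthread_mutex_unlock(&list_lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t tid[NDEV];
		int nthreads = 0, i;

		/* Main resume loop, in list (dependency) order. */
		for (i = 0; i < NDEV; i++) {
			if (devs[i].async) {
				/* Async work to do: start a worker for it. */
				pthread_create(&tid[nthreads++], NULL,
					       async_resume, (void *)(intptr_t)i);
			} else {
				pthread_mutex_lock(&list_lock);
				if (!devs[i].resumed && deps_satisfied(i))
					dev_resume(i);
				pthread_mutex_unlock(&list_lock);
			}
		}
		for (i = 0; i < nthreads; i++)
			pthread_join(tid[i], NULL);
		return 0;
	}

[The sketch deliberately arranges the example list so that a synchronous
device never sits behind an asynchronous one; the patch under discussion
also has to handle that case by making a device wait for its parent to
finish resuming, which this toy version omits.]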