Re: [PATCH 2/6] PM: Asynchronous resume of devices

On Friday 28 August 2009, Alan Stern wrote:
> On Fri, 28 Aug 2009, Rafael J. Wysocki wrote:
> 
> > > Given this design, why bother to invoke device_resume() for the async 
> > > devices?  Why not just start up a bunch of async threads, each of which 
> > > calls async_resume() repeatedly until everything is finished?  (And 
> > > rearrange async_resume() to scan the list first and do the actual 
> > > resume second.)
> > > 
> > > The same goes for the noirq versions.
> > 
> > I thought about that, but there are a few things to figure out:
> > - how many threads to start
> 
> That's a tough question.  Right now you start roughly as many threads
> as there are async devices.  That seems like overkill.

In fact, substantially fewer threads than that are started, for the following
reasons.

First, the async framework will not start more than MAX_THREADS threads,
which is 256 at the moment.  This number is less than the number of async
devices to handle on an average system.

Second, no new async threads are started while the main thread is handling the
sync devices, so the existing threads have a chance to do their job.  If
there's a "cluster" of sync devices in dpm_list, the number of async threads
running is likely to drop rapidly while those devices are being handled.
(BTW, if there were no sync devices, the whole thing would be much simpler,
but I don't think it's realistic to assume we'll be able to get rid of them any
time soon).

Finally, and not least importantly, async threads are not started for the
async devices that were previously handled "out of order" by the already
running async threads (or by async threads that have already finished); the
sketch below illustrates how that works.  My testing shows that there are
quite a few of them on average.  For example, on the HP nx6325 there are
typically as many as 580 async devices handled "out of order" during a
_single_ suspend-resume cycle (including the "early" and "late" phases), while
only a few (fewer than 10) devices are waited for by at least one async
thread.
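Schematically, the whole thing looks like this (a simplified sketch, not the
actual patch code; the names power.async, OP_STARTED and dpm_wait() are made
up here, and locking and error handling are omitted):

	/* Main thread: walk dpm_list, starting async threads for async
	 * devices and handling sync devices itself.  While it is busy
	 * with a sync device, no new async threads are started, so the
	 * running ones get a chance to drain. */
	list_for_each_entry(dev, &dpm_list, power.entry) {
		if (dev->power.async) {
			/* Skip devices already claimed "out of order". */
			if (!test_bit(OP_STARTED, &dev->power.flags))
				async_schedule(async_resume, dev);
		} else {
			device_resume(dev);
		}
	}

	/* Async thread: browse dpm_list just once, claiming every async
	 * device that nobody else has started to handle yet. */
	static void async_resume(void *data, async_cookie_t cookie)
	{
		struct device *dev;

		list_for_each_entry(dev, &dpm_list, power.entry) {
			if (!dev->power.async)
				continue;
			/* Atomic claim: whoever sets the bit first does
			 * the resume, which is how devices end up handled
			 * "out of order" relative to their dpm_list
			 * position. */
			if (test_and_set_bit(OP_STARTED, &dev->power.flags))
				continue;
			dpm_wait(dev->parent);	/* dependencies first */
			device_resume(dev);
		}
	}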

I can try to monitor the number of async threads started if you're interested.

> I would expect that a reasonably small number of threads would suffice 
> to achieve most of the possible time savings.  Something on the order 
> of 10 should work well.  If the majority of the time is spent 
> handling N devices then N+1 threads would be enough.  Judging from some 
> of the comments posted earlier, even 4 threads would give a big 
> advantage.

Unfortunately, that is not the case when the set of async devices includes
only PCI, ACPI and serio devices.  The average time savings are between 5% and
14%, depending on the system and the phase of the cycle (the relative savings
are typically greater for suspend).  Still, that amounts to 0.5 s in some
cases.

> > - when to start them
> 
> You might as well start them at the beginning of dpm_resume and 
> dpm_resume_noirq.  That way they can overlap with the synchronous 
> operations.

In that case they would have to wait at the beginning, so I'd need a mechanism
to wake them up.
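
For example, each pre-started thread would have to sit on a wait queue along
these lines (purely hypothetical; next_ready_device() and resume_done are made
up):

	static DECLARE_WAIT_QUEUE_HEAD(resume_wq);

	static int resume_thread(void *unused)
	{
		struct device *dev;

		for (;;) {
			/* Sleep until some device becomes resumable or
			 * the whole list has been processed. */
			wait_event(resume_wq,
				   (dev = next_ready_device()) || resume_done);
			if (!dev)
				break;
			device_resume(dev);
			/* This may have satisfied the dependencies of
			 * other devices, so kick the waiting threads. */
			wake_up_all(&resume_wq);
		}
		return 0;
	}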

Alternatively, there could be a limit to the number of async threads started
within the current design, but I'd prefer to leave that to the async framework
(namely, if MAX_THREADS makes sense for boot, it's also likely to make sense
for PM).

> > - stop condition
> 
> When an error occurs or when op_started has been set for every 
> remaining async device.

Yeah, that's the easy one. :-)
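
Expressed as a predicate it boils down to something like this (again a sketch
with made-up names):

	/* Keep browsing only if no error occurred and at least one async
	 * device hasn't been claimed (ie. op_started still clear). */
	static bool resume_work_left(void)
	{
		struct device *dev;

		list_for_each_entry(dev, &dpm_list, power.entry)
			if (dev->power.async &&
			    !test_bit(OP_STARTED, &dev->power.flags))
				return true;
		return false;
	}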

> > I had a few ideas, but then I thought it would be simpler to start an async
> > thread when we know there's some async work to do (ie. there's an async
> > device to handle) and make each async thread browse the list just once (the
> > rationale is that we've just handled a device, so there's a chance there are
> > some devices with satisfied dependencies down the list).
> 
> It comes down to this: Should there be many threads, each of which 
> browses the list only once, or should there be a few threads, each of 
> which browses the list many times?

Well, quite obviously I prefer the many threads version. :-)

Thanks,
Rafael
