On Fri, Oct 12, 2012 at 9:51 PM, Oliver Neukum <oneukum@xxxxxxx> wrote:
> On Thursday 11 October 2012 10:36:22 Alan Stern wrote:
>
>> It's worse than you may realize. When a SCSI disk is suspended, all of
>> its ancestor devices may be suspended too. Pages can't be read in from
>> the drive until all those ancestors are resumed. This means that all
>> runtime resume code paths for all drivers that could be bound to an
>> ancestor of a block device must avoid GFP_KERNEL. In practice it's
>> probably easiest for the runtime PM core to use tsk_set_allowd_gfp()
>> before calling any runtime_resume method.
>>
>> Or at least, this will be true when sd supports nontrivial autosuspend.
>
> Up to now, I've found three drivers for which tsk_set_allowd_gfp() wouldn't
> do the job. They boil down to two types of errors. That is surprisingly good.

Looks like all of them are very good examples. :-)

>
> First we have workqueues. bas-gigaset is a good example.
> The driver kills a scheduled work in pre_reset(). If this is done synchronously,
> the driver may need to wait for a memory allocation inside the work.
> In principle we could provide a workqueue limited to GFP_NOIO. Is that worth
> it, or do we just check?

The easiest way is to call tsk_set_allowd_gfp(~GFP_IOFS) at the start of the
work function in this situation, and to restore the flag at the end of the
work function.

>
> Second there is a problem just like priority inversion with realtime tasks:
> usb-skeleton and ati_remote2.
> They take mutexes which are also taken in other code paths. So the error
> handler may need to wait for a mutex to be dropped, which can only happen
> if a memory allocation succeeds, which is waiting for the error handler.

Suppose mutex_lock(A) is called in pre_reset(); one solution is to always call
tsk_set_allowd_gfp(~GFP_IOFS) before each mutex_lock(A). We can do this only
for devices with a storage interface in the current configuration.

Thanks,
--
Ming Lei
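
Below is a minimal sketch of the two workarounds described above. It assumes
the tsk_set_allowd_gfp() interface discussed in this thread (it is not a
mainline API), and additionally assumes that it returns the task's previous
allowed mask so the caller can restore it afterwards. The function and lock
names (example_work_fn, example_io_path, example_lock) are made up for
illustration only.

/*
 * Sketch only: tsk_set_allowd_gfp() is the interface proposed in this
 * thread; the "returns the previous allowed mask" semantics are assumed
 * here so the old value can be restored.
 */
#include <linux/gfp.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(example_lock);	/* hypothetical lock also taken in pre_reset() */

/* Case 1: a work item that pre_reset() may flush synchronously. */
static void example_work_fn(struct work_struct *work)
{
	/* Forbid __GFP_IO/__GFP_FS for the rest of this work item. */
	gfp_t old_allowed = tsk_set_allowd_gfp(~GFP_IOFS);

	/* ... allocations done here behave as GFP_NOIO ... */

	/* Restore the original mask before the worker runs other items. */
	tsk_set_allowd_gfp(old_allowed);
}

/* Case 2: an ordinary code path that shares a mutex with pre_reset(). */
static void example_io_path(void)
{
	gfp_t old_allowed = tsk_set_allowd_gfp(~GFP_IOFS);

	/*
	 * Any allocation made while holding the lock can no longer wait
	 * for I/O to the device whose error handler also wants the lock,
	 * so the inversion described above cannot occur.
	 */
	mutex_lock(&example_lock);
	/* ... critical section ... */
	mutex_unlock(&example_lock);

	tsk_set_allowd_gfp(old_allowed);
}

As noted above, this restriction would only need to be applied for devices
that have a storage interface in their current configuration.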