On Tue, Jul 17, 2018 at 02:34:47PM -0400, Lyude Paul wrote:
> On Tue, 2018-07-17 at 20:32 +0200, Lukas Wunner wrote:
> > On Tue, Jul 17, 2018 at 02:24:31PM -0400, Lyude Paul wrote:
> > > On Tue, 2018-07-17 at 20:20 +0200, Lukas Wunner wrote:
> > > > Okay, the PCI device is suspending and the nvkm_i2c_aux_acquire()
> > > > wants it in resumed state, so it waits forever for the device to
> > > > runtime suspend in order to resume it again immediately afterwards.
> > > >
> > > > The deadlock in the stack trace you've posted could be resolved
> > > > using the technique I used in d61a5c106351 by adding the following
> > > > to include/linux/pm_runtime.h:
> > > >
> > > > static inline bool pm_runtime_status_suspending(struct device *dev)
> > > > {
> > > >         return dev->power.runtime_status == RPM_SUSPENDING;
> > > > }
> > > >
> > > > static inline bool is_pm_work(struct device *dev)
> > > > {
> > > >         struct work_struct *work = current_work();
> > > >
> > > >         return work && work->func == dev->power.work;
> > > > }
> > > >
> > > > Then adding this to nvkm_i2c_aux_acquire():
> > > >
> > > >         struct device *dev = pad->i2c->subdev.device->dev;
> > > >
> > > >         if (!(is_pm_work(dev) && pm_runtime_status_suspending(dev))) {
> > > >                 ret = pm_runtime_get_sync(dev);
> > > >                 if (ret < 0 && ret != -EACCES)
> > > >                         return ret;
> > > >         }
> > > >
> > > > But here's the catch: this only works for an *async* runtime
> > > > suspend.  It doesn't work for pm_runtime_put_sync(),
> > > > pm_runtime_suspend() etc., because then the runtime suspend is
> > > > executed in the context of the caller, not in the context of
> > > > dev->power.work.
>
> [snip]
>
> Something I'm curious about. This isn't the first time I've hit a
> situation like this (see: the improper disable_depth fix I added into
> amdgpu that I now need to go and fix), which makes me wonder: is there
> actually any reason Linux's runtime PM core doesn't just turn gets/puts
> in the context of suspend/resume callbacks into no-ops by default?

So the PM core could save a pointer to the "current" task_struct in
struct device before invoking the ->runtime_suspend or ->runtime_resume
callback, and all subsequent rpm_resume() and rpm_suspend() calls could
then become no-ops if "current" is equal to the saved pointer.  (This
is also how you could solve the deadlock you're dealing with for sync
suspend.)

For a recursive resume during a resume, or a recursive suspend during a
suspend, this might actually be fine.

For a recursive suspend during a resume, or a recursive resume during a
suspend, things become murkier: how is the PM core to know whether the
particular part of the device is still accessible when it hits a
recursive resume during a suspend?  Say a clock is needed for i2c:
then the recursive resume during a suspend may only become a no-op
before that clock has been turned off.  Only the device driver itself
has that knowledge, because it implements the order in which the
subdevices of the GPU are turned off.

Lukas
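
For illustration, here is a minimal, untested sketch of the task-based
check Lukas describes above.  The rpm_task member is hypothetical
(struct dev_pm_info has no such field today), and locking around it is
glossed over:

        /* hypothetical new member in struct dev_pm_info (include/linux/pm.h): */
        struct task_struct *rpm_task;   /* task running a runtime PM callback */

        /* drivers/base/power/runtime.c: record the callback context */
        static int rpm_callback(int (*cb)(struct device *), struct device *dev)
        {
                int retval;

                dev->power.rpm_task = current;
                retval = cb(dev);
                dev->power.rpm_task = NULL;

                return retval;
        }

        /* at the top of both rpm_resume() and rpm_suspend(): */
        if (dev->power.rpm_task == current)
                return 0;       /* recursive call from within a callback: no-op */

As the last paragraph notes, this would only make same-direction
recursion safe; a recursive resume during a suspend would still need
the driver's knowledge of which subdevices have already been powered
down.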