On Mon, 21 Jun 2010, Kevin Hilman wrote:

> Paul Walmsley <paul@xxxxxxxxx> writes:
>
> > As far as I can tell, it's not safe for upper-layer code to idle a
> > device like this.  The driver itself needs to be aware of the
> > device's idle state.
>
> The driver is made aware using the standard dev_pm_ops callback for
> suspend and resume.

I guess the intent of your patch is to avoid having to patch
omap_device_{idle,enable}() into a bunch of driver DPM suspend/resume
callbacks?  If so, some comments:

- dev_pm_ops->suspend_noirq() is intended for use on devices with
  shared interrupts.  Right now the OMAP2+ codebase doesn't use any
  shared interrupts for on-chip devices, as far as I can see.  It looks
  like you use ->suspend_noirq() to ensure your code runs after the
  existing driver suspend functions.  Using ->suspend_noirq() for this
  purpose seems like a hack.  A better place for that code would be in
  a bus->pm->suspend() callback, which will run after the device's
  dev_pm_ops->suspend() callback.

- It is not safe to call omap_device_{idle,enable}() from DPM callbacks
  as the patch does, because they will race with runtime PM calls to
  omap_device_{idle,enable}().  (This is part of the reason why it's
  important to be anal about enforcing the
  omap_device_{enable,idle,shutdown}() state transition limitations -
  so we get warned early when these types of conflicts appear.)
  omap_device*() calls should either be in runtime PM functions or in
  DPM ->suspend()/->resume() callbacks, but not both.

Since we've decided to focus on runtime PM as our primary driver power
management approach, and we expect drivers to aggressively idle their
devices, it seems best to keep the omap_device*() functions in runtime
PM code (a rough driver-side illustration follows below).  At that
point, the common DPM suspend/idle callbacks that you propose can
simply block until runtime PM indicates that the device is idle.  For
example, you could use a per-omap_device mutex.  A sketch of a sample
implementation follows this message.


- Paul
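First, a rough picture of what the driver side might look like under
this model.  The "foo" driver, its callback names, and the
<plat/omap_device.h> include path are illustrative assumptions only -
the point is just that omap_device_{enable,idle}() are called from
runtime PM callbacks and nowhere else:

#include <linux/platform_device.h>

#include <plat/omap_device.h>	/* assumed location of the omap_device API */

static int foo_runtime_suspend(struct device *dev)
{
	struct platform_device *pdev = to_platform_device(dev);

	/* driver-specific quiescing of the hardware would go here */

	/* hwmod/clock/power handling stays in runtime PM code only */
	return omap_device_idle(pdev);
}

static int foo_runtime_resume(struct device *dev)
{
	struct platform_device *pdev = to_platform_device(dev);
	int ret;

	ret = omap_device_enable(pdev);
	if (ret)
		return ret;

	/* driver-specific reprogramming of the hardware would go here */

	return 0;
}

/* referenced from the driver's .driver.pm field */
static const struct dev_pm_ops foo_pm_ops = {
	.runtime_suspend	= foo_runtime_suspend,
	.runtime_resume		= foo_runtime_resume,
};

The omap_device/bus side - the sample implementation mentioned above -
is sketched next.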
In omap_device.c, add:

int omap_device_suspend_mpu_irqs(struct omap_device *od)
{
	/*
	 * For each hwmod, for each MPU IRQ, save the current state,
	 * then disable_irq(irq)
	 */
}

int omap_device_resume_mpu_irqs(struct omap_device *od)
{
	/*
	 * For each hwmod, for each MPU IRQ, enable_irq(irq) if it was
	 * previously enabled
	 */
}

In your bus->pm->suspend()/resume() functions:

int omap_bus_suspend(struct device *dev, pm_message_t state)
{
	struct omap_device *od;

	/* ... look up the omap_device corresponding to dev ... */

	dev_dbg(dev, "waiting for device to go idle\n");

	mutex_lock(&od->active_mutex);

	/* Device guaranteed to be idle at this point */

	/*
	 * do anything else necessary to ensure that driver code is not
	 * entered after the unlock()
	 */

	omap_device_suspend_mpu_irqs(od);

	mutex_unlock(&od->active_mutex);

	return 0;
}

int omap_bus_resume(struct device *dev, pm_message_t state)
{
	struct omap_device *od;

	/* ... look up the omap_device corresponding to dev ... */

	mutex_lock(&od->active_mutex);

	/* Device guaranteed to be idle at this point */

	/*
	 * do anything else necessary to ensure that driver code can be
	 * entered after the unlock()
	 */

	omap_device_resume_mpu_irqs(od);

	mutex_unlock(&od->active_mutex);

	return 0;
}

Then in the omap_device code, add a mutex, active_mutex, to struct
omap_device, init the mutex in omap_device_build_ss(), then:

int omap_device_enable(struct platform_device *pdev)
{
	struct omap_device *od;

	od = _find_by_pdev(pdev);

	...

	/* if the trylock fails, warn, then block until the mutex is free */
	if (WARN(!mutex_trylock(&od->active_mutex),
		 "State transition violation - should never happen\n"))
		mutex_lock(&od->active_mutex);

	...
}

int omap_device_idle(struct platform_device *pdev)
{
	struct omap_device *od;

	od = _find_by_pdev(pdev);

	...

	mutex_unlock(&od->active_mutex);

	...
}

int omap_device_shutdown(struct platform_device *pdev)
{
	struct omap_device *od;

	od = _find_by_pdev(pdev);

	...

	mutex_unlock(&od->active_mutex);

	...
}

The driver needs the usual:

- add a DPM ->suspend() function that tries to stop whatever the device
  is doing, if it's in the middle of something that can be stopped;

- ensure all paths that start new I/O check dev->power.status, and
  return an error if it is DPM_SUSPENDING (both points are sketched
  below)

- Paul
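P.S. A rough sketch of those two driver-side pieces.  The foo_* names,
the foo_device structure and the foo_stop_transfers() helper are
hypothetical - only the dev->power.status / DPM_SUSPENDING check and
the "stop in-flight work in ->suspend()" shape are the point here:

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/pm.h>

/* hypothetical driver-private data */
struct foo_device {
	struct device		*dev;
	/* ... driver state ... */
};

static void foo_stop_transfers(struct foo_device *fd)
{
	/* driver-specific: cancel or drain any in-flight work, if possible */
}

/* DPM ->suspend(): stop whatever the device is in the middle of */
static int foo_suspend(struct device *dev)
{
	struct foo_device *fd = dev_get_drvdata(dev);

	foo_stop_transfers(fd);

	return 0;
}

/* every path that starts new I/O checks dev->power.status first */
static int foo_start_io(struct foo_device *fd)
{
	if (fd->dev->power.status == DPM_SUSPENDING)
		return -EAGAIN;

	/* ... program the hardware / queue the transfer ... */

	return 0;
}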