Hi Kevin,

On 05/31/2012 05:36 PM, Kevin Hilman wrote:
> Jon Hunter <jon-hunter@xxxxxx> writes:
>
>> Hi Kevin,
>>
>> On 05/31/2012 03:42 PM, Kevin Hilman wrote:
>>> Jon Hunter <jon-hunter@xxxxxx> writes:
>>>
>>>> Hi Kevin, Will,
>>>>
>>>> On 05/30/2012 08:29 PM, Will Deacon wrote:
>>>>> Hi Kevin,
>>>>>
>>>>> On Wed, May 30, 2012 at 10:50:01PM +0100, Kevin Hilman wrote:
>>>>>> Basically, I don't like the result when we have to hack around missing
>>>>>> runtime PM support for a driver, so IMO, the driver should be updated.
>>>>>>
>>>>>> IOW, it looks to me like the armpmu driver should grow runtime PM
>>>>>> support. The current armpmu_release|reserve should probably be replaced
>>>>>> with runtime PM get/put, and the functionality in those functions would
>>>>>> become the runtime PM callbacks instead.
>>>>>>
>>>>>> Will, any objections to armpmu growing runtime PM support?
>>>>>
>>>>> My plan for the armpmu reservation is to kill the global reservation scheme
>>>>> that we currently have and push those function pointers into the arm_pmu,
>>>>> so that fits with what you'd like.
>>>>>
>>>>> The only concern I have is that we need the mutual exclusion even when we
>>>>> don't have support for runtime PM. If we can solve that then I'm fine with
>>>>> the approach.
>>>>
>>>> To add a bit more food for thought, I implemented a quick patch to add
>>>> runtime PM support to the PMU driver. You will notice that I have been
>>>> conservative about where I placed the pm_runtime_get/put calls, because
>>>> I am not familiar enough with the PMU driver to know exactly where we
>>>> need to maintain the PMU context. So right now these are just around
>>>> the reserve_hardware/release_hardware calls. This works on OMAP in some
>>>> quick testing. However, I still need to make sure this does not break
>>>> compilation without runtime PM enabled.
>>>>
>>>> Let me know your thoughts.
>>>
>>> That looks good, but I'm curious what would be done in the new
>>> plat->runtime_* hooks. Maybe the irq enable/disable stuff in the pmu
>>> driver needs to be moved into the runtime PM hooks?
>>
>> For omap4, the plat->runtime_* hooks look like ...
>>
>> +static int omap4_pmu_runtime_resume(struct device *dev)
>> +{
>> +	/* configure CTI0 for PMU IRQ routing */
>> +	cti_unlock(&omap4_cti[0]);
>> +	cti_map_trigger(&omap4_cti[0], 1, 6, 2);
>> +	cti_enable(&omap4_cti[0]);
>> +
>> +	/* configure CTI1 for PMU IRQ routing */
>> +	cti_unlock(&omap4_cti[1]);
>> +	cti_map_trigger(&omap4_cti[1], 1, 6, 3);
>> +	cti_enable(&omap4_cti[1]);
>> +
>> +	return 0;
>> +}
>> +
>> +static int omap4_pmu_runtime_suspend(struct device *dev)
>> +{
>> +	cti_disable(&omap4_cti[0]);
>> +	cti_disable(&omap4_cti[1]);
>> +
>> +	return 0;
>> +}
>>
>> This is what I have implemented so far and am currently testing. So really
>> I am just using the hooks to configure the cross triggering interface.
>>
>> Is this what you were thinking?
>>
>
> Basically, yes.
>
> But since I haven't studied the PMU driver closely, I have some dumb
> questions. My concern is that these look basically like the
> plat->irq_[enable|disable] hooks, so I guess the root of my question is:
> do we need both the irq enable/disable and runtime suspend/resume hooks
> in plat, or can we get by with one set?

No, you are right. The way it is now, we could get by with just one set of
hooks. However, the main reason I added the new hooks was in case there are
other places where we can call the pm_runtime_* functions.
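Conceptually, the placement I described is something like the sketch below.
This is only a rough illustration, not the actual patch: the
armpmu_request_irqs/armpmu_free_irqs helpers are placeholders, and I am
assuming the arm_pmu carries a pointer to its backing platform device.

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <asm/pmu.h>		/* assumed: struct arm_pmu with a plat_device member */

static int armpmu_reserve_hardware(struct arm_pmu *armpmu)
{
	struct platform_device *pdev = armpmu->plat_device;
	int err;

	/* Power up the PMU; runtime_resume can then redo any CTI setup. */
	pm_runtime_get_sync(&pdev->dev);

	err = armpmu_request_irqs(armpmu);	/* placeholder helper */
	if (err)
		pm_runtime_put_sync(&pdev->dev);

	return err;
}

static void armpmu_release_hardware(struct arm_pmu *armpmu)
{
	struct platform_device *pdev = armpmu->plat_device;

	armpmu_free_irqs(armpmu);		/* placeholder helper */

	/* Allow the PMU to be powered back down. */
	pm_runtime_put_sync(&pdev->dev);
}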
I am not that familiar with the call flow in the PMU driver, so this was a
simple first attempt at pushing the runtime PM framework into it.

Hmmm ... however, now looking at the history behind the plat->irq_* hooks, I
see that Ming added these specifically for omap4 [1]. I was under the
impression that other architectures might be using them, but I guess not. So
if it is preferred, we could do away with plat->irq_* and replace them with
plat->runtime_*.

Cheers
Jon

[1] http://marc.info/?l=linux-arm-kernel&m=131946766428315&w=2
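P.S. Purely as a sketch of the plat->runtime_* idea (the struct and function
names below are placeholders, not an existing interface; in practice the
fields would go into the existing platform data), the PMU driver's dev_pm_ops
could forward runtime PM transitions to the platform hooks along these lines:

#include <linux/device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

/* Hypothetical platform data; would replace the irq_enable/irq_disable hooks. */
struct arm_pmu_platdata {
	int (*runtime_resume)(struct device *dev);
	int (*runtime_suspend)(struct device *dev);
};

static int armpmu_runtime_resume(struct device *dev)
{
	struct arm_pmu_platdata *plat = dev_get_platdata(dev);

	/* e.g. omap4_pmu_runtime_resume() would re-program the CTIs here */
	if (plat && plat->runtime_resume)
		return plat->runtime_resume(dev);

	return 0;
}

static int armpmu_runtime_suspend(struct device *dev)
{
	struct arm_pmu_platdata *plat = dev_get_platdata(dev);

	if (plat && plat->runtime_suspend)
		return plat->runtime_suspend(dev);

	return 0;
}

static const struct dev_pm_ops armpmu_dev_pm_ops = {
	SET_RUNTIME_PM_OPS(armpmu_runtime_suspend, armpmu_runtime_resume, NULL)
};

The omap4 PMU device would then just point its platform data at the CTI
configure/disable functions quoted above, and the separate irq_enable/
irq_disable hooks would no longer be needed.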