>> Mike, one idea I'm exploring is having platform-specific buses with QoS
>> constraints specified via runtime_pm as part of the LDM. Adding dynamic
>> class creation within pm_qos, or a type enum as you suggest, would work.
>> However, I think this kind of behavior would fit nicely within runtime_pm.
>
> I'm not too familiar with the current work in runtime pm and LDM.
> However, platform-specific buses sound like a good thing, at least
> more future proof. This works in the embedded SoC world, but I'm
> wondering what happens when you have reconfigurable hardware and your
> same peripheral is now sourced from a different bus?

My current thought is to have the board file enumerate the device on the
proper bus; this configuration is target-specific. Each driver would
register on as many buses as it needs to. Some of the driver/bus
registrations would be extraneous, but the driver/device binding would
only be possible on one bus.

> Does runtime pm hook into pm qos similar to how cpuidle uses pm qos?
> So would the platform-specific buses be a pm runtime or a pm qos
> addition?
>
> -- Mike

Not to my knowledge. I'm considering modeling the runtime_pm enhancements
after pm_qos. That is, add/update/remove_requirements() callbacks would be
added to struct dev_pm_ops for runtime_pm. Instead of having a system-wide
QoS sink per QoS class, the requirements would be passed up the LDM tree.
This way, a bus driver would be able to receive QoS constraints from all
of its devices and, combined with the active state of its children from
runtime_pm, do some useful power management. The entire bus could be
throttled or idled in this manner.

This scheme would require the SW device topology to match the HW topology
rather than having everything hang off the platform bus.
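To give a rough idea of the shape I have in mind, here is a sketch. All
of the names below (pm_qos_req, dev_pm_ops_sketch, bus_aggregate_latency)
are made up for illustration; nothing like this exists in dev_pm_ops
today:

#include <limits.h>

struct device;

/* A per-device constraint, e.g. maximum tolerable wakeup latency. */
struct pm_qos_req {
	int value;
	struct pm_qos_req *next;	/* bus driver keeps a list per class */
};

/* Proposed callbacks, living alongside runtime_suspend/runtime_resume. */
struct dev_pm_ops_sketch {
	int (*add_requirement)(struct device *dev, struct pm_qos_req *req);
	int (*update_requirement)(struct device *dev, struct pm_qos_req *req,
				  int new_value);
	int (*remove_requirement)(struct device *dev, struct pm_qos_req *req);
};

/*
 * The bus driver folds the requirements posted by its children into a
 * single constraint. Combined with the children's runtime_pm status, it
 * can throttle or idle the whole bus and re-post the aggregate to its
 * own parent instead of to a system-wide pm_qos sink.
 */
static int bus_aggregate_latency(struct pm_qos_req *reqs)
{
	int min = INT_MAX;

	for (; reqs; reqs = reqs->next)
		if (reqs->value < min)
			min = reqs->value;

	return min;	/* INT_MAX means no constraint, free to idle */
}

- Bryan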