On Tue, Jun 8, 2010 at 8:05 PM, mark gross <640e9920@xxxxxxxxx> wrote:
> On Tue, Jun 08, 2010 at 04:03:20PM -0700, Bryan Huntsman wrote:
>> >> http://lkml.org/lkml/2010/4/22/213 (I guess the details are in the
>> >> archives.) I'm happy to re-visit it.
>> >>
>> >
>> > Interesting patch. It looks like having a "system wide bus" doesn't
>> > easily apply to msm and tegra platforms. Examples of things I would
>> > like to be able to control are the i2c and memory buses.
>> >
>> > I'm tempted to suggest adding two types, memory and i2c, but I'm
>> > not sure how future-proof that would be given the growing
>> > complexity in the embedded hardware road-map. What about the
>> > possibility of registering not one but several buses? You could add
>> > a bus qos param with a type enum, or bind to some platform_driver
>> > or bus_driver.
>> >
>> > Then there's the issue of having to deal with platform-specific
>> > buses: do you add such a type to pm qos with only one user, or have
>> > some platform bus types defined somewhere? The generic min / max
>> > code for resource X can be useful so that everyone doesn't spin
>> > their own resource framework in their own architecture.
>> >
>> > -- Mike
>>
>> Mike, one idea I'm exploring is having platform-specific busses with
>> QoS constraints specified via runtime_pm as part of the LDM. Adding
>> dynamic class creation within pm_qos, or a type enum as you suggest,
>> would work. However, I think this kind of behavior would fit nicely
>> within runtime_pm.
>>
>
> Something like that is what Kevin Hilman was thinking too. It would
> bring a qos concept to the LDM for each bus driver object. We'd need
> to pick which qos parameters to use (I recommend latency and
> bandwidth) and decide how "local" the effects of these bus_qos
> interfaces are.
>

Are you thinking of having a (possible) pm qos constraint for each
struct device_driver, or each struct bus_type? That would probably
work for something like i2c. I'm not sure how it would work for the
memory bus, though, if you don't want to tie memory bus performance to
cpu speed: at least from what I've seen on omap / msm / tegra, there's
no device_driver for a memory bus clock. I could be wrong, so someone
correct me if I'm mistaken.

Typically what I've seen (on msm / tegra / omap) is: if the cpu is
running at frequency X, then set the memory bus clock to Y. That leads
to a bunch of hacks where drivers request cpu frequency X when what
they really need is the faster memory speed.

Perhaps we need both a per-bus_type pm qos parameter and a new global
memory bus parameter (per cpu for numa systems?). I'm worried about
over-engineering a solution here for non-existing (or non-interested)
customers. Ideally we'd end up with something that fits our needs with
Android on msm / omap / tegra platforms but is still flexible enough
for non-SoC systems.

-- Mike

> They are not the same as the more global system-wide pm_qos
> parameters, and they would be unlikely ever to be exposed to
> usermode.
>

I don't think we care about exposing this to userspace for our needs.

> Yes, I think something like this is inevitable and will happen. But
> we need some good applications to roll out the idea with. (I think.)
>
> --mgross
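
To make the per-bus_type idea a bit more concrete, here's a rough
sketch of the generic min / max aggregation I mean. To be clear, every
name in it (struct bus_qos, bus_qos_request, bus_qos_target, ...) is
made up for illustration; this is not written against the existing
pm_qos code, and it's plain userspace C so it can actually be compiled
and run:

/*
 * Sketch only: generic per-bus constraint aggregation.
 * Latency requests aggregate as a min (tightest bound wins);
 * bandwidth requests aggregate as a max (largest demand wins).
 */
#include <stdio.h>
#include <limits.h>

enum bus_qos_param { BUS_QOS_LATENCY, BUS_QOS_BANDWIDTH };

struct bus_qos_request {
	enum bus_qos_param param;
	int value;
	int active;
};

/*
 * Per-bus constraint table.  In the kernel this might hang off
 * struct bus_type, struct device, or a platform-specific object;
 * that placement is exactly the open question above.
 */
struct bus_qos {
	struct bus_qos_request reqs[8];
	int nreqs;
};

static int bus_qos_target(const struct bus_qos *bq, enum bus_qos_param p)
{
	/* No active requests means "unconstrained" for that parameter. */
	int i, target = (p == BUS_QOS_LATENCY) ? INT_MAX : 0;

	for (i = 0; i < bq->nreqs; i++) {
		const struct bus_qos_request *r = &bq->reqs[i];

		if (!r->active || r->param != p)
			continue;
		if (p == BUS_QOS_LATENCY && r->value < target)
			target = r->value;
		if (p == BUS_QOS_BANDWIDTH && r->value > target)
			target = r->value;
	}
	return target;
}

int main(void)
{
	struct bus_qos i2c_qos = { .nreqs = 2, .reqs = {
		{ BUS_QOS_BANDWIDTH, 400, 1 },	/* driver A wants 400 KB/s */
		{ BUS_QOS_BANDWIDTH, 100, 1 },	/* driver B wants 100 KB/s */
	} };

	printf("i2c bandwidth target: %d KB/s\n",
	       bus_qos_target(&i2c_qos, BUS_QOS_BANDWIDTH));
	return 0;
}

The aggregation rule is the only genuinely generic part; it prints
"i2c bandwidth target: 400 KB/s" because the largest bandwidth demand
wins. Whether the table above lives per bus_type, per device, or as a
new global parameter for the memory bus is the part we still need to
decide.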