On Tue, Oct 27, 2009 at 06:37:58PM -0600, Ai Li wrote:
> > How often are you calling pm_qos_update_requirement?
> >
> > I think calling pm_qos_ interfaces too often makes me wonder about
> > my assumptions and your sanity.
> >
> > Can you explain why pm_qos_update_requirement is getting hit often
> > enough to bother with this change?
> >
> > Other than that I don't have a problem with moving to handles, if
> > it's a practical change made for reasons other than making API
> > abuse less painful.
> >
> > Further, if the implicit assumption that pm_qos calls are on cold
> > paths is wrong, then perhaps more thought is needed than just
> > changing things to handle-based searches.
> >
>
> Our embedded platforms support different low power modes.  With these
> modes, the deeper the sleep, the greater the power savings, and the
> larger the interrupt latency coming out of the low power mode.
>
> To help the platform achieve the greatest power savings, some of our
> device drivers set a latency QoS only when there is a service request
> to the driver or a device transaction.  When the transaction or
> request is done, the drivers cancel the QoS with
> pm_qos_update_requirement(PM_QOS_DEFAULT_VALUE), allowing the
> platform to reach a deeper sleep.
>
> The approach gives us good power savings.  However, when there are
> lots of transactions, pm_qos_update_requirement() gets called a lot
> of times.

Oh.  This will not scale with the aggregation logic very well at all.
If pm_qos_update_requirement gets hit per transaction through a driver
code path, then I think some thought on the scalability is needed, and
perhaps a change to the aggregation design for such uses.

Do you have a patch for the handle implementation I could look at?

--mgross

> ~Ai
>

_______________________________________________
linux-pm mailing list
linux-pm@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/linux-pm
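
For context, a minimal sketch of the per-transaction pattern Ai Li
describes, assuming the name-based pm_qos_params interface of that era
(pm_qos_add_requirement / pm_qos_update_requirement /
pm_qos_remove_requirement); the driver name and the 500 usec bound are
made-up example values, not taken from the mail:

#include <linux/pm_qos_params.h>

#define MY_DRV_QOS_NAME   "my_driver"	/* hypothetical requirement name */
#define MY_DRV_LATENCY_US 500		/* hypothetical latency bound */

static int my_driver_probe(void)
{
	/* Register with the don't-care default; no constraint held yet. */
	return pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY,
				      MY_DRV_QOS_NAME, PM_QOS_DEFAULT_VALUE);
}

static void my_driver_do_transaction(void)
{
	/* Hold the tight latency bound only for the transaction. */
	pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY,
				  MY_DRV_QOS_NAME, MY_DRV_LATENCY_US);

	/* ... perform the device transaction ... */

	/* Relax back to the default so deeper sleep states are allowed. */
	pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY,
				  MY_DRV_QOS_NAME, PM_QOS_DEFAULT_VALUE);
}

static void my_driver_remove(void)
{
	pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, MY_DRV_QOS_NAME);
}

Each such update has to look the named requirement up in the class
list and re-run the aggregation, which is where the per-transaction
cost being discussed comes from.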
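
As for the handle idea being discussed, one possible shape (purely an
illustrative sketch, not the actual patch Ai Li is being asked for) is
for the caller to own an opaque request object, so an update can reach
its entry directly instead of searching by name on every call:

#include <linux/plist.h>
#include <linux/types.h>

/* Hypothetical handle-based interface, for illustration only: the
 * caller owns the request object, so an update can touch its list
 * entry directly rather than walking the class list by name.
 */
struct pm_qos_request {
	struct plist_node node;		/* entry in the per-class list */
	int pm_qos_class;
};

int  pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
			s32 value);
int  pm_qos_update_request(struct pm_qos_request *req, s32 new_value);
void pm_qos_remove_request(struct pm_qos_request *req);

/* A driver would then embed one request per device and do, e.g.:
 *	pm_qos_update_request(&mydev->qos_req, MY_DRV_LATENCY_US);
 *	... transaction ...
 *	pm_qos_update_request(&mydev->qos_req, PM_QOS_DEFAULT_VALUE);
 */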