Hi Lorenzo,

Lorenzo Pieralisi <lorenzo.pieralisi@xxxxxxx> writes:

> + - latency
> +       Usage: Required
> +       Value type: <u32>
> +       Definition: Worst case latency in microseconds required to
> +                   enter and exit the C-state.
> +
> + - min-residency
> +       Usage: Required
> +       Value type: <u32>
> +       Definition: Time in microseconds required for the CPU to be in
> +                   the C-state to make up for the dynamic power
> +                   consumed to enter/exit the C-state in order to
> +                   break even in terms of power consumption compared
> +                   to C1 state (wfi).
> +                   This parameter depends on the operating conditions
> +                   (operating point, cache state) and must assume
> +                   worst case scenario.

I have a concern with these. I know it is not the fault of this patch,
as these parameters are what the current cpuidle governor/driver
interface uses, but..

Power state entry/exit latencies can vary quite a lot; CPU and memory
frequencies in particular affect them, as can e.g. PMIC properties.
The power level during entry/exit likewise depends on clocks and
voltages, and the power level of a sleep state itself can be context
dependent (clocks and voltages). This means that the minimum residency
for energy break-even varies as well.

Defining a minimum residency against C1 is a bit arbitrary, and there
is no guarantee that the break-even order of the idle states remains
constant over device context changes.

I have not really thought this through properly, but here is an idea:
how about an alternative interface between the governor and the
driver? The cpuidle core would provide the expected wakeup time and
the currently enforced latency constraint to the driver, and the
driver would make the decision about which state to choose (a rough
sketch of what I mean is appended below).

--Antti
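
For reference, here is a rough, purely illustrative sketch of the kind
of driver-side selection hook meant above, written as a self-contained
user space program. None of the names, types or numbers below are
existing kernel interfaces; they are hypothetical stand-ins.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-state parameters as the driver would know them
 * under the current operating conditions (operating point, cache
 * state, PMIC settings, ...). */
struct idle_state_params {
	const char *name;
	uint32_t exit_latency_us;  /* worst-case entry+exit latency now */
	uint32_t min_residency_us; /* break-even residency now */
};

/* Hypothetical driver callback: the cpuidle core passes down what it
 * knows (expected sleep length and the latency it can tolerate right
 * now) and the driver, which knows the current context, picks the
 * state.  Returns the index of the deepest suitable state, or -1 if
 * nothing beyond plain wfi qualifies. */
static int driver_select_state(const struct idle_state_params *states,
			       int nr_states,
			       uint64_t expected_sleep_us,
			       uint32_t latency_limit_us)
{
	int best = -1;
	int i;

	for (i = 0; i < nr_states; i++) {
		if (states[i].exit_latency_us > latency_limit_us)
			continue;
		if (states[i].min_residency_us > expected_sleep_us)
			continue;
		best = i; /* states assumed ordered shallow -> deep */
	}

	return best;
}

int main(void)
{
	/* Made-up numbers; a real driver would derive these from the
	 * current device context instead of static DT values. */
	const struct idle_state_params states[] = {
		{ "cpu-retention",   100,   500 },
		{ "cpu-off",         300,  1500 },
		{ "cluster-off",    1200, 10000 },
	};
	int idx = driver_select_state(states, 3, 5000, 500);

	printf("selected state: %s\n", idx >= 0 ? states[idx].name : "wfi");
	return 0;
}

The only point of the sketch is the direction of information flow: the
core hands the driver its prediction and constraint instead of the
driver exporting fixed latency/residency numbers, so the driver can
account for the context dependence discussed above.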