On 09/17/2012 10:50 PM, Rafael J. Wysocki wrote:
> On Monday, September 17, 2012, Daniel Lezcano wrote:
>> On 09/08/2012 12:17 AM, Rafael J. Wysocki wrote:
>>> On Friday, September 07, 2012, Daniel Lezcano wrote:
>>>> Since commit 46bcfad7a819bd17ac4e831b04405152d59784ab,
>>>> cpuidle: Single/Global registration of idle states
>>>>
>>>> we have a single registration for the cpuidle states, which makes
>>>> sense. But now two new architectures are coming: Tegra3 and
>>>> big.LITTLE.
>>>>
>>>> These architectures have different cpus with different
>>>> characteristics for power saving: high load => powerful processors,
>>>> idle => small processors.
>>>>
>>>> That implies different cpu latencies.
>>>>
>>>> This patchset keeps the current behavior as introduced by Deepthi,
>>>> without breaking the drivers, and adds the possibility to specify
>>>> per-cpu states.
>>>>
>>>>  * Tested on Intel Core 2 Duo T9500
>>>>  * Tested on vexpress by Lorenzo Pieralisi
>>>>  * Tested on Tegra3 by Peter De Schrijver
>>>>
>>>> Daniel Lezcano (6):
>>>>   acpi : move the acpi_idle_driver variable declaration
>>>>   acpi : move cpuidle_device field out of the acpi_processor_power
>>>>     structure
>>>>   acpi : remove pointless cpuidle device state_count init
>>>
>>> I've posted comments about patches [1-3/6] already. In short, I don't
>>> like [1/6], [2/6] would require some more work IMO, and I'm not sure
>>> about the validity of the observation that [3/6] is based on.
>>>
>>> Yes, I agree that the ACPI processor driver as a whole might be
>>> cleaner and it probably would be good to spend some time on cleaning
>>> it up, but not necessarily in a hurry.
>>>
>>> Unfortunately, I also don't agree with the approach used by the
>>> remaining patches, which is to try to use a separate array of states
>>> for each individual CPU core. This way we end up with quite some
>>> duplicated data if the CPU cores in question actually happen to be
>>> identical.
>>
>> Actually, there is a single array of states, which is defined with the
>> cpuidle_driver. A pointer to this array is added to the cpuidle_device
>> structure and used from the cpuidle core.
>>
>> If the cpu cores are identical, this pointer will refer to the same
>> array.
>
> OK, but what if there are two (or more) sets of cores, where all cores
> in one set are identical, but two cores from different sets differ?

A second array is defined and registered for those cores with the
cpuidle_register_states function.

Let's take the big.LITTLE architecture as an example. There are two A7s
and two A15s, resulting in 4 cpuidle_device structures (e.g. dev_A7_1,
dev_A7_2, dev_A15_1, dev_A15_2). The driver then registers a different
cpu states array for the A7s and another for the A15s.

At the end:

 dev_A7_1->states  points to the states array 1
 dev_A7_2->states  points to the states array 1
 dev_A15_1->states points to the states array 2
 dev_A15_2->states points to the states array 2

It is similar with Tegra3.

I think Peter and Lorenzo already wrote a driver based on this approach.
Peter, Lorenzo, any comments ?

The single registration mechanism introduced by Deepthi is kept, and we
have a way to specify different idle states for different cpus.

> In that case it would be good to have one array of states per set, but
> the patch doesn't seem to do that, does it?

Yes, this is exactly what the patch does.
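To make the layout concrete, here is a minimal sketch of the idea, not
the actual driver: cpuidle_register_states() and the dev->states pointer
are the ones this patchset introduces, but the exact signature, the
per-cpu device, the cpu_is_a15() test and the latency values below are
illustrative assumptions only.

	#include <linux/kernel.h>
	#include <linux/cpuidle.h>
	#include <linux/cpumask.h>
	#include <linux/percpu.h>

	static DEFINE_PER_CPU(struct cpuidle_device, bl_idle_dev);

	/* One states array per cluster; identical cores share the
	 * same array, so no data is duplicated. Values are made up. */
	static struct cpuidle_state a7_states[] = {
		{ .name = "A7-WFI", .exit_latency = 1,
		  .target_residency = 1 },
	};

	static struct cpuidle_state a15_states[] = {
		{ .name = "A15-WFI", .exit_latency = 10,
		  .target_residency = 20 },
	};

	static int __init bl_idle_init(void)
	{
		int cpu;

		for_each_possible_cpu(cpu) {
			struct cpuidle_device *dev =
				&per_cpu(bl_idle_dev, cpu);

			/* dev->states ends up pointing to the array
			 * matching the cpu's cluster, as in the
			 * dev_A7_x / dev_A15_x example above. */
			if (cpu_is_a15(cpu))	/* hypothetical helper */
				cpuidle_register_states(dev, a15_states,
						ARRAY_SIZE(a15_states));
			else
				cpuidle_register_states(dev, a7_states,
						ARRAY_SIZE(a7_states));

			dev->cpu = cpu;
			cpuidle_register_device(dev);
		}
		return 0;
	}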
>> Maybe I misunderstood your remark, but there is no data duplication;
>> that was the purpose of this approach: to add a pointer referring to a
>> single array when the cores are identical, and to a different array
>> when the cores are different (set by the driver). Furthermore, this
>> patch allows supporting multiple cpu latencies without impacting the
>> existing drivers.
>
> Well that's required. :-)

Yes :)

>>> What about using a separate cpuidle driver for every kind of
>>> different CPUs in the system (e.g. one driver for "big" CPUs and the
>>> other for "little" ones)?
>>>
>>> Have you considered this approach already?
>>
>> No, what would be the benefit of this approach ?
>
> Uniform handling of all the CPUs of the same kind without data
> duplication and less code complexity, I think.
>
>> We will need to switch the driver each time we switch the cluster
>> (assuming it is the bL switcher that is in place and not the
>> scheduler). IMHO, that could be suboptimal because we will have to
>> (un)register the driver, register the devices, and pull all the sysfs
>> and notification mechanisms. The cpuidle core is not designed for
>> that.
>
> I don't seem to understand how things are supposed to work, then.

Sorry, I did not suggest that. I am wondering how several cpuidle
drivers can co-exist in the current state of the code. Maybe I
misunderstood your idea. The patchset I sent is pretty simple and does
not duplicate the states array.

It would be nice if Len could react to this patchset (4/6, 5/6, and
6/6). Cc'ing him at his Intel address.

> What _exactly_ do you mean by "the bL switcher", for instance?

The switcher is in charge of migrating tasks from the A7s to the A15s
(and vice versa) depending on the system load, bringing one cluster up
and visible while the other is not visible [1].

[1] www.arm.com/files/downloads/big.LITTLE_Final.pdf

-- 
 <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs
Follow Linaro: <http://www.facebook.com/pages/Linaro> Facebook |
<http://twitter.com/#!/linaroorg> Twitter |
<http://www.linaro.org/linaro-blog/> Blog