Re: [PM-WIP_CPUFREQ][PATCH 0/6 V3] Cleanups for cpufreq

"Turquette, Mike" <mturquette@xxxxxx> writes:

> On Fri, May 27, 2011 at 1:26 AM, Santosh Shilimkar
> <santosh.shilimkar@xxxxxx> wrote:
>> On 5/27/2011 11:37 AM, Menon, Nishanth wrote:
>>>
>>> On Thu, May 26, 2011 at 22:06, Santosh Shilimkar
>>> <santosh.shilimkar@xxxxxx> wrote:
>>>>
>>>> On 5/26/2011 11:40 PM, Kevin Hilman wrote:
>>>>>
>>>>> So here's a dumb question, being rather ignorant of CPUfreq on SMP.
>>>>>
>>>>> Should we be running a CPUfreq instance on both CPUs when they cannot be
>>>>> scaled independently?
>>>>>
>>>>> What is being scaled here is actually the cluster (the MPU SS via
>>>>> dpll_mpu_ck), not an individual CPU.  So to me, it only makes sense to
>>>>> have an instance of the driver per scalable device, which in this case
>>>>> is a single MPU SS.
>>>>>
>>>> We are running only one instance, for exactly the reason above.
>>>> You are completely right, and that's the whole intention of those
>>>> two CPUMASK lines in the initialization code.
>>>>
>>>>
>>>>> What am I missing?
>>>>>
>>>> Not at all.
>>>
>>> So not have a cpufreq driver registered at all for CPU1? Life would be a
>>> lot simpler in omap2-cpufreq.c as a result. But that said, two views:
>>> a) future silicon somewhere ahead might need the current
>>> infrastructure to scale into different tables..
>>> b) as far as userspace sees it - cpu0 and cpu1 exist, cool, *but*
>>> cpu1 is not scalable (no /sys/devices/system/cpu/cpu1/cpufreq.. but
>>> .../cpu1/online is present). Keep in mind that userspace is usually
>>> written chip-independent. IMHO registering drivers for both devices
>>> does make sense; it reflects the reality of the system. 2 CPUs
>>> scaling together - why do we want OMAP-"specific" stuff here?
>>>
>> It's not OMAP-specific, Nishant.
>> This feature is supported by the CPUFREQ core. My Intel machine
>> uses the exact same scheme for CPUFREQ. It's a feature provided
>> specifically for hardware that cannot scale CPUs
>> individually: instead of individual CPUs, the whole CPU cluster
>> scales together.
>>
>> Both CPUs still have the same consistent view of all CPUFREQ controls.
>> But in the background, CPU1 is carrying only symbolic links.
>>
>> We make use of the "related/affected cpus" feature, which is
>> supported by the generic CPUFREQ core. Nothing OMAP-specific
>> here.
>
> Santosh is referring to this code in our cpufreq driver:
>
>         /*
>          * On an OMAP SMP configuration, both processors share the voltage
>          * and clock, so both CPUs need to be scaled together and hence
>          * need software co-ordination. Use the cpufreq affected_cpus
>          * interface to handle this scenario. The additional is_smp() check
>          * is to keep the SMP_ON_UP build working.
>          */
>         if (is_smp()) {
>                 policy->shared_type = CPUFREQ_SHARED_TYPE_ANY;
>                 cpumask_or(cpumask, cpumask_of(policy->cpu), cpumask);
>                 cpumask_copy(policy->cpus, cpumask);
>         }
>
> policy->cpus knows about each CPU now (in fact, because of this,
> /sys/devices/system/cpu/cpu1/cpufreq is a symlink to its cpu0
> sibling!)
>
> This is pretty good in fact, since governors like ondemand take into
> consideration *all* of the CPUs in policy->cpus:
>
>         /* Get Absolute Load - in terms of freq */
>         max_load_freq = 0; <- tracks greatest need across all CPUs
>
>         for_each_cpu(j, policy->cpus) {
>                 ... find max_load_freq ...
>
> The ultimate effect is that we run a single workqueue, only on CPU0
> (kondemand or whatever), that takes the load characteristics of both
> CPU0 and CPU1 into account.

OK, makes sense.  Thanks for the description.

All of this came up because this series is going through contortions to
make two CPUfreq registrations share a single freq_table, protect
against concurrent access from different CPUs, etc., which led me to
wonder why we need two registrations at all.

Kevin
--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

