Re: [PATCH] cpuidle: don't wakeup processor when set a longer latency

Thank you. The patch is updated below.

From 2d0b4afb5461847dcdf08a87b02015d061b12e85 Mon Sep 17 00:00:00 2001
From: Lianwei Wang <lianwei.wang@xxxxxxxxx>
Date: Fri, 26 Apr 2013 10:59:24 +0800
Subject: [PATCH] cpuidle: wakeup processor on a smaller latency

Check the PM-Qos latency against each CPU's idle sleep length, and only
wake up a CPU if the requested PM-Qos latency is smaller than its
remaining idle sleep time. This reduces the cpu wakeup count by at
least 50% when the PM-Qos is updated.

The PM-Qos is not updated most of the time, especially in the home idle
case. But in some specific cases the PM-Qos may be updated very
frequently. (E.g. my measurements show that it changes frequently
between 2us/3us/200us/200s during bootup and in the usb case.)

The battery current drain was measured with a PMIC or a battery
eliminator. Although the saving is small, it is still a reasonable
improvement.

Change-Id: If564fd0d9c53cf100bd85247bfd509dfeaf54c13
Signed-off-by: Lianwei Wang <lianwei.wang@xxxxxxxxx>
---
 drivers/cpuidle/cpuidle.c |   17 ++++++++++++++++-
 1 files changed, 16 insertions(+), 1 deletions(-)
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 2f0083a..a0829ad 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -18,6 +18,7 @@
 #include <linux/ktime.h>
 #include <linux/hrtimer.h>
 #include <linux/module.h>
+#include <linux/tick.h>
 #include <trace/events/power.h>

 #include "cpuidle.h"
@@ -462,11 +463,25 @@ static void smp_callback(void *v)
  * requirement.  This means we need to get all processors out of their C-state,
  * and then recalculate a new suitable C-state. Just do a cross-cpu IPI; that
  * wakes them all right up.
+ * l -> latency in us
  */
 static int cpuidle_latency_notify(struct notifier_block *b,
                unsigned long l, void *v)
 {
-       smp_call_function(smp_callback, NULL, 1);
+       int cpu, rcpu = smp_processor_id();
+       s64 s; /* sleep_length in us */
+       struct tick_device *td;
+
+       for_each_online_cpu(cpu) {
+               if (cpu == rcpu)
+                       continue;
+               td = tick_get_device(cpu);
+               s = ktime_us_delta(td->evtdev->next_event, ktime_get());
+               if ((long)l < (long)s) {
+                       smp_call_function_single(cpu, smp_callback, NULL, 1);
+               }
+       }
+
        return NOTIFY_OK;
 }

--
1.7.4.1
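
For context (not part of the patch itself): the notifier above only runs when a
PM-Qos cpu latency constraint actually changes. Below is a minimal sketch of a
consumer, assuming the usual <linux/pm_qos.h> request API of this kernel
generation; the example_* names are made up for illustration. Updating or
removing the request is what runs pm_qos_update_target() and, through the
PM_QOS_CPU_DMA_LATENCY notifier chain, cpuidle_latency_notify():

#include <linux/pm_qos.h>

static struct pm_qos_request example_qos_req;

/* Tighten the allowed cpu wakeup latency around a latency-sensitive burst. */
static void example_start_burst(void)
{
	/* Values for PM_QOS_CPU_DMA_LATENCY are in microseconds. */
	pm_qos_add_request(&example_qos_req, PM_QOS_CPU_DMA_LATENCY, 20);
}

/* Relax the constraint again when the burst is done. */
static void example_end_burst(void)
{
	/*
	 * Raising the latency bound back to the default used to IPI every
	 * online cpu; with this patch, cpus whose next timer event is
	 * already closer than the new bound are left asleep.
	 */
	pm_qos_update_request(&example_qos_req, PM_QOS_DEFAULT_VALUE);
	pm_qos_remove_request(&example_qos_req);
}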

2013/5/10 Daniel Lezcano <daniel.lezcano@xxxxxxxxxx>:
> On 05/09/2013 09:14 AM, Lianwei Wang wrote:
>> Thank you very much. I have a quick updated patch based on your comments.
>>
>> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
>> index 2f0083a..cd1af4b 100644
>> --- a/drivers/cpuidle/cpuidle.c
>> +++ b/drivers/cpuidle/cpuidle.c
>> @@ -18,6 +18,7 @@
>>  #include <linux/ktime.h>
>>  #include <linux/hrtimer.h>
>>  #include <linux/module.h>
>> +#include <linux/tick.h>
>>  #include <trace/events/power.h>
>>
>>  #include "cpuidle.h"
>> @@ -466,7 +467,20 @@ static void smp_callback(void *v)
>>  static int cpuidle_latency_notify(struct notifier_block *b,
>>                 unsigned long l, void *v)
>>  {
>> -       smp_call_function(smp_callback, NULL, 1);
>> +       int cpu, rcpu = smp_processor_id();
>> +       s64 s;
>> +       struct tick_device *td;
>> +
>> +       for_each_online_cpu(cpu) {
>> +               if (cpu == rcpu)
>> +                       continue;
>> +               td = tick_get_device(cpu);
>> +               s = ktime_us_delta(td->evtdev->next_event, ktime_get());
>> +               if ((long)l < (long)s) {
>> +                       smp_call_function_single(cpu, smp_callback, NULL, 1);
>> +               }
>> +       }
>> +
>>         return NOTIFY_OK;
>>  }
>
> The patch sounds reasonable. A comment and explicit names for the
> variables would be nice.
>
> eg.
> l => latency
> s => sleep
>
>> Thanks,
>> Lianwei
>>
>> 2013/5/8 Daniel Lezcano <daniel.lezcano@xxxxxxxxxx>:
>>> On 05/08/2013 04:44 AM, Lianwei Wang wrote:
>>>> When a PM-Qos is updated, the cpuidle driver will wake up all the CPUs
>>>> no matter what latency is set. But actually it only needs to wake up
>>>> the CPUs when a shorter latency is set. In this way we can reduce the
>>>> cpu wakeup count and save battery.
>>>
>>> I am curious, how many times could the pm_qos be changed in a system
>>> live cycle to measure an improvement with this patch ?
>>>
>>> Do you have a scenario where you measured a noticeable power saving ?
>>>
>> The PM-Qos is not updated most of the time, especially in the home idle case.
>> But in some specific cases the PM-Qos may be updated very frequently.
>> (E.g. my measurements show that it changes frequently between
>> 2us/3us/200us/200s during bootup and in the usb case.) The battery current
>> drain was measured with a PMIC or a battery eliminator. Although this is
>> just a small saving, it is still reasonable to improve it.
>
> Thanks for the information. Can you add this information in the changelog ?
>
>>>> So we can pass the prev_value to the notifier callback and compare the
>>>> latency curr_value and prev_value in the cpuidle latency notifier
>>>> callback. It modifies a common interface (dummy --> prev_value) but should
>>>> be safe since no one uses the dummy parameter currently.
>>>>
>>>> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
>>>> index e1f6860..1e1758c 100644
>>>> --- a/drivers/cpuidle/cpuidle.c
>>>> +++ b/drivers/cpuidle/cpuidle.c
>>>> @@ -498,7 +498,11 @@ static void smp_callback(void *v)
>>>>  static int cpuidle_latency_notify(struct notifier_block *b,
>>>>                 unsigned long l, void *v)
>>>>  {
>>>> -       smp_call_function(smp_callback, NULL, 1);
>>>> +       unsigned long prev_value = (unsigned long) v;
>>>> +
>>>> +       /* Dont't waktup processor when set a longer latency */
>>>
>>>                     ^^^^^^
>>>                     wakeup
>>>
>>> Instead of passing prev and curr, using the dummy variable, why don't
>>> you pass the result of (curr - prev) ?
>>>
>>> A negative value means, the latency is smaller and positive is bigger.
>>>
>>> Also, may be the optimization could be more improved: if the latency is
>>> bigger than the next wakeup event, it is not necessary to wakeup the cpus.
>>>
>> This is a good idea. So it needs to check the next_event on each CPU and
>> wake up the cpu only if the requested latency is smaller than it. A quick
>> patch is attached.
>
> Yes, it sounds good.
>
>>>> +       if (l < prev_value)
>>>> +               smp_call_function(smp_callback, NULL, 1);
>>>>         return NOTIFY_OK;
>>>>  }
>>>>
>>>> diff --git a/kernel/power/qos.c b/kernel/power/qos.c
>>>> index 9322ff7..533b8bc 100644
>>>> --- a/kernel/power/qos.c
>>>> +++ b/kernel/power/qos.c
>>>> @@ -205,7 +205,7 @@ int pm_qos_update_target(struct pm_qos_constraints
>>>> *c, struct plist_node *node,
>>>>         if (prev_value != curr_value) {
>>>>                 blocking_notifier_call_chain(c->notifiers,
>>>>                                              (unsigned long)curr_value,
>>>> -                                            NULL);
>>>> +                                            (void *)prev_value);
>>>>                 return 1;
>>>>         } else {
>>>>                 return 0;
>>>>
>>>
>>>
>>> --
>>>  <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs
>>>
>>> Follow Linaro:  <http://www.facebook.com/pages/Linaro> Facebook |
>>> <http://twitter.com/#!/linaroorg> Twitter |
>>> <http://www.linaro.org/linaro-blog/> Blog
>>>
>
>
> --
>  <http://www.linaro.org/> Linaro.org │ Open source software for ARM SoCs
>
> Follow Linaro:  <http://www.facebook.com/pages/Linaro> Facebook |
> <http://twitter.com/#!/linaroorg> Twitter |
> <http://www.linaro.org/linaro-blog/> Blog
>

Attachment: 0001-cpuidle-wakeup-processor-on-a-smaller-latency.patch
Description: Binary data

