Re: [RFC PATCH] PM: Introduce generic DVFS framework with device-specific OPPs

On Wed, Apr 27, 2011 at 10:46 PM, Rafael J. Wysocki <rjw@xxxxxxx> wrote:
> On Wednesday, April 27, 2011, MyungJoo Ham wrote:
>> Hello,
>>
>> 2011/4/26 Rafael J. Wysocki <rjw@xxxxxxx>:
>> > Hi,
>> >
> ...
>> >> +
>> >> +/**
>> >> + * dvfs_tickle_device() - Guarantee maximum operation speed for a while
>> >> + *                  instaneously.
>> >> + * @dev:            the device to be tickled.
>> >> + * @duration_ms:    the duration of tickle effect.
>> >> + *
>> >> + * Tickle sets the device at the maximum frequency instaneously and
>> >> + * the maximul frequency is guaranteed to be used for the given duration.
>> >          ^^^^^^^
>> > I guess that should be "maximum".
>>
>> Yes. :)
>>
>> >
>> >> + * For faster user reponse time, an input event may tickle a related device
>> >> + * so that the input event does not need to wait for the DVFS to react with
>> >> + * normal interval.
>> >> + */
>> >
>> > Can you explain how this function is supposed to be used, please?
>>
>> Ok, I'll show you a usage example
>>
>> The system: a touchscreen based computer with DVFS'able GPU.
>> Use case: a user scrolls the screen with fingers suddenly. (think of
>> an iPhone or Android device's home screen filled with icons)
>> While there is no touchscreen input (thus, the screen does not move
>> fast), we do not use GPU too much; thus it is scaled to run at slower
>> rate (let's say 100MHz).
>> If a user suddenly touches screen and starts moving, the screen should
>> follow it; thus, requires much output from GPU.
>> With normal DVFS-GPU, it would take some time to speed it up because
>> it should monitor the GPU usage and react afterwards.
>>
>> With the API given, "tickle" may be included in the user input event.
>> So that GPU can be scaled to maximum frequency immediately.
>
> So in this case the input subsystem would be calling dvfs_tickle_device()
> (or whatever it is finally called) for the GPU, right?

Yes, that's correct. The input subsystem should somehow "tickle" a
DVFS device (the GPU in this case). With CPUFREQ-Tickle (never
upstreamed anyway), we have been doing this by hooking tickle into
input events in the board file (i.e., arch/arm/mach-*/mach-*.c).
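
To make the intended semantics concrete, here is a minimal user-space
model of what dvfs_tickle_device() plus a periodic monitor would do.
The struct and helper names are mine, not the patch's; this is a
sketch of the behavior, not the actual kernel implementation:

```c
#include <assert.h>

/* Hypothetical model of a DVFS-managed device. */
struct dvfs_device {
	long cur_khz;		/* current operating frequency */
	long max_khz;		/* maximum OPP frequency */
	long tickle_until_ms;	/* deadline while tickle is in effect */
};

/* Jump to the maximum frequency immediately and hold it for
 * duration_ms; a later, longer tickle extends the deadline. */
static void dvfs_tickle_device(struct dvfs_device *dev, long now_ms,
			       long duration_ms)
{
	dev->cur_khz = dev->max_khz;
	if (now_ms + duration_ms > dev->tickle_until_ms)
		dev->tickle_until_ms = now_ms + duration_ms;
}

/* Periodic monitor: normal scaling decisions are suppressed while a
 * tickle is still in effect, so the device cannot be scaled back
 * down mid-tickle. */
static void dvfs_monitor(struct dvfs_device *dev, long now_ms,
			 long wanted_khz)
{
	if (now_ms < dev->tickle_until_ms)
		return;
	dev->cur_khz = wanted_khz;
}
```

An input event handler in the board file would call
dvfs_tickle_device() on the GPU's device, and the periodic monitor
keeps running unchanged.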

>
> I'm afraid you'd need a user space interface for that and let the
> application (or library) handling the touchpad input control the "tickle".
> Otherwise it would be difficult to arrange things correctly (the association
> between the input subsystem and the GPU is not obvious at the kernel level).
>

Creating a user space interface that connects an input event (or any
other user space event) with DVFS (like
/sys/devices/system/cpu/cpux/cpufreq/...) would give user space the
flexibility to tickle devices, although it might not reduce response
time as much as the in-kernel approach in the previous paragraph. We
could do that by allowing something like: "echo 0 >
/sys/class/devfreq/devname.0/tickle" or even "echo 0 >
/sys/class/devfreq/tickle_all"
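
As a rough model of what a per-device tickle node plus a tickle_all
node could end up doing (the names and the "walk every device"
semantics are my assumptions, not a settled interface):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical devfreq device; only the fields the model needs. */
struct devfreq_dev {
	long cur_khz;
	long max_khz;
};

/* What a write to /sys/class/devfreq/devname.0/tickle would do. */
static void devfreq_tickle(struct devfreq_dev *dev)
{
	dev->cur_khz = dev->max_khz;
}

/* What a write to /sys/class/devfreq/tickle_all would do: walk the
 * list of registered devfreq devices and tickle each one. */
static void devfreq_tickle_all(struct devfreq_dev *devs, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		devfreq_tickle(&devs[i]);
}
```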

> Ideally, though, the working OPP for the GPU should depend on the number of
> processing requests per a unit of time, pretty much like for a CPU.

As long as we measure workload (the number of requests or operations
per unit time) periodically, we have the delayed-reaction issue
mentioned above, which tickling mitigates. Ideally, if we could count
each operation in the device driver, we would not need tickling
because we could monitor the workload on a per-operation basis rather
than a per-sampling-interval basis. For a device with, say, 1,000
operations/sec, that might be feasible. However, GPUs may run at over
100,000,000 operations/sec, so we had better just monitor them
periodically. Besides, many devices, including GPUs, allow DMA and
user-addressable memory without device driver intervention for normal
operations once the device is configured; thus the device driver
wouldn't know anything on a per-operation basis. We need to stick
with periodic monitoring anyway (along with the delayed DVFS issue)
unless a GPU can raise an interrupt of "I'm overloaded, let me work
faster"
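
The delayed-reaction arithmetic from the 50ms example quoted below
can be written out as a small model (the threshold behavior is the
usual "scale up only if the sampled load is high enough" assumption):

```c
#include <assert.h>

/* An event arrives offset_ms after the last sampling point.  The
 * first sample after the event only sees a partial-interval load; if
 * that load stays below the up-threshold (first_sample_triggers == 0),
 * the governor reacts one full period later. */
static long reaction_delay_ms(long period_ms, long offset_ms,
			      int first_sample_triggers)
{
	long to_next_sample = period_ms - offset_ms;

	if (first_sample_triggers)
		return to_next_sample;
	return to_next_sample + period_ms;
}
```

With a 50ms period and an event 25ms into the interval, the measured
load at the next sample is about 50%, which does not trigger scaling,
so the reaction comes 25ms + 50ms = 75ms after the event.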

>
>> If the monitoring interval is 50ms, DVFS will likely react with
>> about 75ms of delay.
>> For example, if the touch event has occurred at 25ms after the last
>> DVFS monitoring event, at the next DVFS monitoring event, the measured
>> load will be about 50%, which usually won't require DVFS to speed up.
>> Then, DVFS will react at the next monitoring event; thus, it took 50ms
>> + 25ms.
>>
>> 75ms is enough to give some impression to users. These days, many
>> mobile devices require 60Hz-based UI, which is about 16ms.
>>
>> Anyway, this happens with drivers/cpufreq also. We have been testing
>> "tickling" associated with drivers/cpufreq/cpufreq.c. This has been
>> reduced user response time significantly removing the need for tuning
>> the threshold values.
>
> I think I understand the problem, but I'm not sure if there's a clean way
> to trigger the "tickle" from the kernel level.

I'm considering the following options (they are not mutually
exclusive). What do you think?
1. (The way we've been using) Add a tickle call to input events in
the board file.
2. Provide a sysfs interface that triggers tickling along with the
devfreq sysfs nodes.

>
> Thanks,
> Rafael
> _______________________________________________
> linux-pm mailing list
> linux-pm@xxxxxxxxxxxxxxxxxxxxxxxxxx
> https://lists.linux-foundation.org/mailman/listinfo/linux-pm
>


Cheers!
- MyungJoo
-- 
MyungJoo Ham (함명주), Ph.D.
Mobile Software Platform Lab,
Digital Media and Communications (DMC) Business
Samsung Electronics
cell: 82-10-6714-2858


