Re: CPU excessively long times between frequency scaling driver calls - bisected

On Tue, Mar 1, 2022 at 6:18 PM Doug Smythies <dsmythies@xxxxxxxxx> wrote:
>
> On Tue, Mar 1, 2022 at 3:58 AM Rafael J. Wysocki <rafael@xxxxxxxxxx> wrote:
> > On Tue, Mar 1, 2022 at 6:53 AM Feng Tang <feng.tang@xxxxxxxxx> wrote:
> > > On Mon, Feb 28, 2022 at 08:36:03PM +0100, Rafael J. Wysocki wrote:
> ...
> > > >
> > > > However, it was a bit racy, so maybe it's good that it was not complete.
> > > >
> > > > Below is a new version.
> > >
> > > Thanks for the new version. I just gave it a try, and the occasional
> > > long delay of cpufreq auto-adjusting that I have seen cannot be reproduced
> > > after applying it.
> >
> > OK, thanks!
> >
> > I'll wait for feedback from Doug, though.
>
> Hi Rafael,
>
> Thank you for your version 2 patch.
> I screwed up an overnight test and will have to re-do it.
> However, I do have some results.

Thanks for testing it!

> From reading the patch code, one worry was the
> potential to drive down the desired/required CPU
> frequency for the main periodic workflow, causing
> overruns, i.e. the inability of the task to complete
> its work before the next period.

It is not clear to me why you were worried about that just from reading
the patch.  Can you explain, please?

> I have always had overrun
> information, but it has never been relevant before.
>
> The other worry was whether the turbo/non-turbo
> frequency boundary is a sufficient threshold.

Agreed.

> I do not know how to test any final solution
> thoroughly, as so far I have simply found one
> problematic example that is good enough to
> demonstrate the issue.
> We have many years of experience with the
> convenient multi-second NMI tick forcing the
> clean-up of lingering high pstates. I'd keep that
> periodic mechanism and let it decide internally
> whether the TSC stuff needs to be executed or not.
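If I understand the suggestion correctly, it is something along these
lines (a purely illustrative userspace-style sketch; every name below is
made up for the example, nothing here is from the actual kernel code):

/*
 * Keep the familiar multi-second periodic tick, always use it to
 * re-evaluate lingering high P-state requests, and decide *inside*
 * it whether the TSC checks also need to run.
 */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool tsc_checks_needed(void)
{
	return false;	/* placeholder for whatever gates the TSC work */
}

static void clean_up_lingering_pstates(void)
{
	puts("re-evaluate stale high P-state requests");
}

int main(void)
{
	for (int tick = 0; tick < 3; tick++) {
		clean_up_lingering_pstates();	/* always done, as before */
		if (tsc_checks_needed())
			puts("run TSC checks");	/* only when required */
		sleep(1);	/* stand-in for the real multi-second period */
	}
	return 0;
}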
>
> Anyway...
>
> Base Kernel 5.17-rc3.
> "stock" : unmodified.
> "revert" : with commit b50db7095fe reverted
> "rjw-2" : with this version2 patch added.
>
> Test 1 (as before. There is no test 2, yet.):
> 347 hertz work/sleep frequency on one CPU while the others carry
> virtually no load, but enough to increase the requested pstate,
> at around 0.02 hertz aggregate.
>
> It is important to note that the main load is approximately
> 38.6% @ 2.422 GHz, or 100% at 0.935 GHz,
> and almost exclusively uses idle state 2 (of
> 4 total idle states):
>
> /sys/devices/system/cpu/cpu7/cpuidle/state0/name:POLL
> /sys/devices/system/cpu/cpu7/cpuidle/state1/name:C1_ACPI
> /sys/devices/system/cpu/cpu7/cpuidle/state2/name:C2_ACPI
> /sys/devices/system/cpu/cpu7/cpuidle/state3/name:C3_ACPI
>
> Turbostat was used: ~10 samples at 300 seconds per sample.
> Processor package power (Watts):
>
> Workflow was run for 1 hour each time or 1249201 loops.
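As a quick cross-check of the numbers above (the same amount of work is
done per period at either operating point):

\[
0.386 \times 2.422\,\text{GHz} \approx 0.935\,\text{GHz},
\qquad
\frac{1}{347\,\text{Hz}} \approx 2.88\,\text{ms per period},
\qquad
0.386 \times 2.88\,\text{ms} \approx 1.11\,\text{ms of work per period}.
\]

So anything that holds the CPU below roughly 0.935 GHz means the work no
longer fits in the period, which is exactly the overrun case mentioned
earlier.  Likewise, 347 loops/s x 3600 s = 1,249,200, consistent with
the loop count above.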
>
> revert:
> ave: 3.00
> min: 2.89
> max: 3.08

I'm not sure what the above three numbers are.

> ave freq: 2.422 GHz.
> overruns: 1.
> max overrun time: 113 uSec.
>
> stock:
> ave: 3.63 (+21%)
> min: 3.28
> max: 3.99
> ave freq: 2.791 GHz.
> overruns: 2.
> max overrun time: 677 uSec.
>
> rjw-2:
> ave: 3.14 (+5%)
> min: 2.97
> max: 3.28
> ave freq: 2.635 GHz

I guess the numbers above could be reduced further by using a P-state
below the max non-turbo one as a limit.
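For illustration only (a minimal sketch with invented values, not the
actual v2 patch), the trade-off is simply where the cap on a lingering
request sits:

#include <stdio.h>

/*
 * A lingering P-state request is capped at some limit.  A lower limit
 * saves more package power on the mostly idle CPUs, but increases the
 * risk of overruns if the periodic workload ends up running at the
 * capped value.
 */
static unsigned int clamp_lingering_pstate(unsigned int requested,
					   unsigned int limit)
{
	return requested > limit ? limit : requested;
}

int main(void)
{
	unsigned int turbo_request = 32;	/* example: ratio for ~3.2 GHz */
	unsigned int max_nonturbo = 24;		/* example: ratio for ~2.4 GHz */

	printf("limit at max non-turbo:    %u\n",
	       clamp_lingering_pstate(turbo_request, max_nonturbo));
	printf("limit below max non-turbo: %u\n",
	       clamp_lingering_pstate(turbo_request, max_nonturbo - 4));
	return 0;
}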

> overruns: 1042.
> max overrun time: 9,769 uSec.

This would probably get worse then, though.

ATM I'm not quite sure why this happens, but you seem to have some
insight into it, so it would help if you shared it.

Thanks!


