Re: [RFC 2/2] selftest/cpuidle: Add support for cpuidle latency measurement

Hi Doug,
Thanks for trying these patches out.

On 18/03/21 2:30 am, Doug Smythies wrote:
Hi Pratik,

It just so happens that I have been trying Artem's version this last
week, so I tried yours.

On Mon, Mar 15, 2021 at 4:49 AM Pratik Rajesh Sampat
<psampat@xxxxxxxxxxxxx> wrote:
...
To run this test specifically:
$ make -C tools/testing/selftests TARGETS="cpuidle" run_tests
While I suppose it should have been obvious, I interpreted
the "$" sign to mean I could run as a regular user, which I can not.

Ah yes, this does need root privileges; I should have prefixed the command with
sudo in the instructions to make that clear.
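That is, the run command should read:

```shell
# The selftest needs root to toggle idle states via sysfs:
sudo make -C tools/testing/selftests TARGETS="cpuidle" run_tests
```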

There are also a few optional arguments that the script can take:
         [-h <help>]
         [-m <location of the module>]
         [-o <location of the output>]
         [-v <verbose> (run on all cpus)]
Default Output location in: tools/testing/cpuidle/cpuidle.log
Isn't it:

tools/testing/selftests/cpuidle/cpuidle.log

My bad, that was a typo. I missed the "selftests" directory while typing
out the path.

? At least, that is where my file was.

Other notes:

No idle state for CPU 0 ever gets disabled.
I assume this is because CPU 0 can never be offline,
so that bit of code (Disable all stop states) doesn't find its state.
By the way, processor = Intel i5-9600K

I had tried these patches on an IBM POWER9 processor, and disabling CPU0's idle
states works there. However, it does make sense that some platforms treat CPU 0
differently.
Maybe I could add a case so that if disabling an idle state fails for a CPU, we
simply skip that CPU?
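Something along these lines, sketched against the standard cpuidle sysfs
interface. To be clear, this is not the patch's actual code, and the mock
directory tree below is only there so the sketch runs without root; on real
hardware SYSFS_CPU would be /sys/devices/system/cpu:

```shell
# Sketch: disable every idle state per CPU; if a write is rejected
# (as it may be for CPU0 on some platforms), skip that CPU instead
# of aborting the whole test.
SYSFS_CPU="$(mktemp -d)"      # mock tree; real path: /sys/devices/system/cpu
mkdir -p "$SYSFS_CPU/cpu0/cpuidle/state0" "$SYSFS_CPU/cpu0/cpuidle/state1"
echo 0 > "$SYSFS_CPU/cpu0/cpuidle/state0/disable"
echo 0 > "$SYSFS_CPU/cpu0/cpuidle/state1/disable"

for cpu in "$SYSFS_CPU"/cpu[0-9]*; do
    for state in "$cpu"/cpuidle/state*; do
        # Try to disable the state; on failure, move on to the next CPU.
        if ! echo 1 > "$state/disable" 2>/dev/null; then
            echo "skipping ${cpu##*/}: could not disable ${state##*/}"
            continue 2
        fi
    done
    echo "${cpu##*/}: all states disabled"
done
```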

The system is left with all idle states disabled, well not for CPU 0
as per the above comment. The suggestion is to restore them,
otherwise my processor hogs 42 watts instead of 2.
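Agreed, that's an oversight. Restoring could be done with an EXIT trap over the
same sysfs files, for example. Again, this is a sketch rather than the patch's
code, run here against a mock tree so it works unprivileged; on real hardware
SYSFS_CPU would be /sys/devices/system/cpu:

```shell
# Sketch: re-enable every idle state on exit so the machine is not
# left with all C-states disabled after the test.
SYSFS_CPU="$(mktemp -d)"      # mock tree; real path: /sys/devices/system/cpu
mkdir -p "$SYSFS_CPU/cpu0/cpuidle/state0"
echo 1 > "$SYSFS_CPU/cpu0/cpuidle/state0/disable"  # pretend the test disabled it

restore_idle_states() {
    for disable in "$SYSFS_CPU"/cpu[0-9]*/cpuidle/state*/disable; do
        echo 0 > "$disable" 2>/dev/null
    done
}
# Run the restore even if the script exits early or is interrupted.
trap restore_idle_states EXIT INT TERM

# ... latency measurements would run here ...
restore_idle_states   # also called directly here, for demonstration
```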

My results are highly variable per test.

Question: Do you notice high variability with IPI test, Timer test or both?

I can think of two reasons for high run to run variance:

1. If you observe variance in the timer tests, then I believe a "C-state
pre-wake" mechanism on some Intel machines could be at play here, which can
pre-wake a CPU from an idle state when timers are armed. I'm not sure whether
the Intel platform you're running on does that.

Artem had described this behavior to me a while ago and I think his wult page
describes this behavior in more detail:
https://intel.github.io/wult/#c-state-pre-wake

2. I have noticed variability in results when kernel book-keeping or jitter
tasks get scheduled from time to time on an otherwise idle core.
In the full per-CPU logs at tools/testing/selftests/cpuidle/cpuidle.log, can
you spot any obvious per-CPU, per-state outliers?

You may also want to run the test in verbose mode, which runs on all the
threads of each CPU, with: "sudo ./cpuidle.sh -v". While latency mostly
matters on a per-core basis, it may still be a good idea to see whether that
changes the observations.

--
Thanks and regards,
Pratik

My system is very idle:
Example (from turbostat at 6 seconds sample rate):
Busy%   Bzy_MHz IRQ     PkgTmp  PkgWatt RAMWatt
0.03    4600    153     28      2.03    1.89
0.01    4600    103     29      2.03    1.89
0.05    4600    115     29      2.08    1.89
0.01    4600    95      28      2.09    1.89
0.03    4600    114     28      2.11    1.89
0.01    4600    107     29      2.07    1.89
0.02    4600    102     29      2.11    1.89

...

... Doug



