Re: [PATCH v2 1/1] ACPI: CPPC: Disable FIE if registers in PCC regions

On 8/10/22 13:51, Ionela Voinescu wrote:
Hi folks,

On Wednesday 10 Aug 2022 at 13:29:08 (+0100), Lukasz Luba wrote:
Hi Jeremy,

+CC Valentin since he might be interested in this finding
+CC Ionela, Dietmar

I have a few comments for this patch.


On 7/28/22 23:10, Jeremy Linton wrote:
PCC regions utilize a mailbox to set/retrieve register values used by
the CPPC code. This is fine as long as the operations are
infrequent. With the FIE code enabled, though, the overhead can range
from 2-11% of system CPU time (e.g. as measured by top) on Arm
based machines.

So, before enabling FIE, assure that none of the registers used by
cppc_get_perf_ctrs() are in a PCC region. Furthermore, let's also add
a module parameter which can disable it at boot or module reload.

Signed-off-by: Jeremy Linton <jeremy.linton@xxxxxxx>
---
   drivers/acpi/cppc_acpi.c       | 41 ++++++++++++++++++++++++++++++++++
   drivers/cpufreq/cppc_cpufreq.c | 19 ++++++++++++----
   include/acpi/cppc_acpi.h       |  5 +++++
   3 files changed, 61 insertions(+), 4 deletions(-)
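
For reference, the check described above could look roughly like the
sketch below. It reuses the CPC_IN_PCC() macro and the per-CPU
cpc_desc_ptr that already exist in drivers/acpi/cppc_acpi.c; the exact
register list checked in v2 may differ.

/*
 * Sketch only: return true if any register used by cppc_get_perf_ctrs()
 * lives in a PCC region, i.e. every read is a FW mailbox round trip.
 */
bool cppc_perf_ctrs_in_pcc(void)
{
	int cpu;

	for_each_present_cpu(cpu) {
		struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpu);

		if (CPC_IN_PCC(&cpc_desc->cpc_regs[DELIVERED_CTR]) ||
		    CPC_IN_PCC(&cpc_desc->cpc_regs[REFERENCE_CTR]) ||
		    CPC_IN_PCC(&cpc_desc->cpc_regs[CTR_WRAP_TIME]) ||
		    CPC_IN_PCC(&cpc_desc->cpc_regs[REFERENCE_PERF]))
			return true;
	}

	return false;
}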


1. You assume that all platforms with PCC regions used for this
    purpose would have this big overhead.
    Do we know which versions of the HW mailbox have been implemented
    and used in platforms that show this 2-11% overhead?
    Do more recent MHUs also have such issues, so that we could block
    them by default (as your code does)?

2. I would prefer to simply change the default Kconfig value to 'n' for
    ACPI_CPPC_CPUFREQ_FIE, instead of adding runtime check code which
    disables it.
    We probably introduced this overhead for older platforms with
    this commit:

commit 4c38f2df71c8e33c0b64865992d693f5022eeaad
Author: Viresh Kumar <viresh.kumar@xxxxxxxxxx>
Date:   Tue Jun 23 15:49:40 2020 +0530

     cpufreq: CPPC: Add support for frequency invariance



If a test server with this config enabled performs well
in the stress tests, then on the production server the config may be
set to 'y' (or 'm' and the module loaded).
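
For illustration, the change would roughly be the following, in
drivers/cpufreq/Kconfig.arm (a sketch from memory; the 'depends on'
line and help text may differ):

config ACPI_CPPC_CPUFREQ_FIE
	bool "Frequency Invariance support for CPPC cpufreq driver"
	depends on ACPI_CPPC_CPUFREQ && GENERIC_ARCH_TOPOLOGY
	default n
	help
	  This extends frequency invariance support in the CPPC cpufreq
	  driver, by using CPPC delivered and reference performance
	  counters. If in doubt, say N.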

I would vote not to add extra code which, after a while, might be
extended because some HW is actually capable (so we could check at
runtime and enable it). IMO this creates additional complexity in our
already diverse configuration/tunable space.


I agree that having CONFIG_ACPI_CPPC_CPUFREQ_FIE default to 'n' is the
simpler solution, but it puts the decision in the hands of platform
providers, which might result in this functionality not being used most
of the time, if at all. That being said, the use of CPPC counters is
meant as a last resort for FIE, if the platform does not have AMUs. This
is why I recommended that it default to 'n' in the review of the
original patches.

But I don't see these runtime options as adding a lot of complexity
and therefore agree with the idea of this patch, versus the config
change above, with two design comments:
  - Rather than having a check for fie_disabled in multiple init and exit
    functions, I think the code should be slightly redesigned to elegantly
    bail out of most functions if cppc_freq_invariance_init() failed.
  - Given the multiple options to disable this functionality (config,
    PCC check), I don't see a need for a module parameter or runtime user
    input, unless we make it override all previous decisions, as in: if
    CONFIG_ACPI_CPPC_CPUFREQ_FIE=y, then even if cppc_perf_ctrs_in_pcc()
    is true, if the fie_disabled module parameter is 'no', the counters
    should still be used for FIE (a rough sketch of that precedence
    follows below).
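
Roughly, with a hypothetical tri-state parameter (the names and values
here are illustrative, not from the patch):

/* Illustrative: -1 = no user input, 0 = force-enable, 1 = force-disable */
static int fie_disabled = -1;
module_param(fie_disabled, int, 0444);

static void cppc_freq_invariance_init(void)
{
	if (fie_disabled == 1)
		return;		/* explicit user override: never use counters */

	if (fie_disabled == -1 && cppc_perf_ctrs_in_pcc()) {
		pr_info("FIE not enabled on systems with registers in PCC\n");
		return;		/* auto-detect path: PCC access is too slow */
	}

	/*
	 * fie_disabled == 0 overrides the PCC check: the user explicitly
	 * asked for counter-based FIE, so set it up even behind a mailbox.
	 */
	/* ... existing init ... */
}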


A few things:
1. With the default CONFIG_ACPI_CPPC_CPUFREQ_FIE=y we've introduced
a performance regression on older HW servers, which is not good IMO.
It looks like it wasn't a good idea. FIE invoked from the tick and
going through the mailbox and FW sounds like a bad design (see the
sketch of that path below). You need a really fast HW mailbox, FW,
and a uC running it to provide decent performance.
2. Keeping code which is not used on a server, because at runtime we
discover this PCC overhead issue, doesn't make sense.
3. System integrators or distro engineers should be able to experiment
with different kernel config options on a platform and disable/enable
this option on a particular server. I'm afraid we cannot measure
performance at runtime in this code and decide whether it is good to
use it or not. Only stress tests can tell us that.
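
For reference on point 1, the tick-side hook added by commit
4c38f2df71c8 looks roughly like this (paraphrased from memory from
drivers/cpufreq/cppc_cpufreq.c):

/*
 * Every scheduler tick queues irq_work which eventually calls
 * cppc_get_perf_ctrs(). When the delivered/reference counters sit in
 * a PCC region, each sample is a mailbox round trip to FW, which is
 * where the reported 2-11% overhead comes from.
 */
static void cppc_scale_freq_tick(void)
{
	struct cppc_freq_invariance *cppc_fi;

	cppc_fi = &per_cpu(cppc_freq_inv, smp_processor_id());

	/* cppc_get_perf_ctrs() may sleep, so defer it out of the tick */
	irq_work_queue(&cppc_fi->irq_work);
}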


