Re: [PATCH v13 2/5] riscv: Add static key for misaligned accesses

On Thu, Dec 21, 2023 at 7:38 AM Charlie Jenkins <charlie@xxxxxxxxxxxx> wrote:
>
> Support static branches depending on the value of misaligned accesses.
> This will be used by a later patch in the series. All cpus must be
> considered "fast" for this static branch to be flipped.
>
> Signed-off-by: Charlie Jenkins <charlie@xxxxxxxxxxxx>
> ---
>  arch/riscv/include/asm/cpufeature.h |  2 ++
>  arch/riscv/kernel/cpufeature.c      | 30 ++++++++++++++++++++++++++++++
>  2 files changed, 32 insertions(+)
>
> diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
> index a418c3112cd6..7b129e5e2f07 100644
> --- a/arch/riscv/include/asm/cpufeature.h
> +++ b/arch/riscv/include/asm/cpufeature.h
> @@ -133,4 +133,6 @@ static __always_inline bool riscv_cpu_has_extension_unlikely(int cpu, const unsi
>         return __riscv_isa_extension_available(hart_isa[cpu].isa, ext);
>  }
>
> +DECLARE_STATIC_KEY_FALSE(fast_misaligned_access_speed_key);
> +
>  #endif
> diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
> index b3785ffc1570..095eb6ebdcaa 100644
> --- a/arch/riscv/kernel/cpufeature.c
> +++ b/arch/riscv/kernel/cpufeature.c
> @@ -10,6 +10,7 @@
>  #include <linux/bitmap.h>
>  #include <linux/cpuhotplug.h>
>  #include <linux/ctype.h>
> +#include <linux/jump_label.h>
>  #include <linux/log2.h>
>  #include <linux/memory.h>
>  #include <linux/module.h>
> @@ -728,6 +729,35 @@ void riscv_user_isa_enable(void)
>                 csr_set(CSR_SENVCFG, ENVCFG_CBZE);
>  }
>
> +DEFINE_STATIC_KEY_FALSE(fast_misaligned_access_speed_key);
> +
> +static int set_unaligned_access_static_branches(void)
> +{
> +       /*
> +        * This will be called after check_unaligned_access_all_cpus so the
> +        * result of unaligned access speed for all cpus will be available.
> +        */
> +
> +       int cpu;
> +       bool fast_misaligned_access_speed = true;
> +
> +       for_each_online_cpu(cpu) {
Only online CPUs? Could there be an offline CPU that does not have
fast misaligned access speed and is brought online later?

Move this check into your riscv_online_cpu() callback so it runs for each
CPU as it comes online, and use stop_machine() for synchronization.
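
Roughly along these lines (a minimal sketch only, with a hypothetical
callback name; it assumes the per-CPU probe has already filled in
misaligned_access_speed for the incoming CPU, and the exact
synchronization around the key flip is left open):

    #include <linux/cpu.h>
    #include <linux/jump_label.h>
    #include <linux/percpu.h>

    #include <asm/cpufeature.h>
    #include <asm/hwprobe.h>

    /*
     * Sketch, not the patch author's code: if a CPU coming online was not
     * measured as "fast", make sure the fast-path static branch is off.
     * Whether this needs stop_machine() or a lighter-weight lock is the
     * open question above.
     */
    static int riscv_online_cpu_misaligned(unsigned int cpu)
    {
            if (per_cpu(misaligned_access_speed, cpu) !=
                RISCV_HWPROBE_MISALIGNED_FAST)
                    static_branch_disable(&fast_misaligned_access_speed_key);

            return 0;
    }

This could also be folded into the existing riscv_online_cpu() callback
rather than registering a separate cpuhp state.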

> +               int this_perf = per_cpu(misaligned_access_speed, cpu);
> +
> +               if (this_perf != RISCV_HWPROBE_MISALIGNED_FAST) {
> +                       fast_misaligned_access_speed = false;
> +                       break;
> +               }
> +       }
> +
> +       if (fast_misaligned_access_speed)
> +               static_branch_enable(&fast_misaligned_access_speed_key);
> +
> +       return 0;
> +}
> +
> +arch_initcall_sync(set_unaligned_access_static_branches);
> +
>  #ifdef CONFIG_RISCV_ALTERNATIVE
>  /*
>   * Alternative patch sites consider 48 bits when determining when to patch
>
> --
> 2.43.0
>
>


-- 
Best Regards
 Guo Ren
