This is a note to let you know that I've just added the patch titled

    x86/mem_encrypt: Unbreak the AMD_MEM_ENCRYPT=n build

to the 6.4-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-mem_encrypt-unbreak-the-amd_mem_encrypt-n-build.patch
and it can be found in the queue-6.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 0a9567ac5e6a40cdd9c8cd15b19a62a15250f450 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Date: Fri, 16 Jun 2023 22:15:31 +0200
Subject: x86/mem_encrypt: Unbreak the AMD_MEM_ENCRYPT=n build

From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>

commit 0a9567ac5e6a40cdd9c8cd15b19a62a15250f450 upstream.

Moving mem_encrypt_init() broke the AMD_MEM_ENCRYPT=n build because the
declaration of that function was under #ifdef CONFIG_AMD_MEM_ENCRYPT and
the obvious placement for the inline stub was the #else path.

This is a leftover of commit 20f07a044a76 ("x86/sev: Move common memory
encryption code to mem_encrypt.c") which made mem_encrypt_init() depend
on X86_MEM_ENCRYPT without moving the prototype. That did not fail back
then because there was no inline stub as the core init code had a weak
function.

Move both the declaration and the stub out of the CONFIG_AMD_MEM_ENCRYPT
section and guard it with CONFIG_X86_MEM_ENCRYPT.
Fixes: 439e17576eb4 ("init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init()")
Reported-by: kernel test robot <lkp@xxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Closes: https://lore.kernel.org/oe-kbuild-all/202306170247.eQtCJPE8-lkp@xxxxxxxxx/
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/include/asm/mem_encrypt.h |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -17,6 +17,12 @@
 
 #include <asm/bootparam.h>
 
+#ifdef CONFIG_X86_MEM_ENCRYPT
+void __init mem_encrypt_init(void);
+#else
+static inline void mem_encrypt_init(void) { }
+#endif
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern u64 sme_me_mask;
@@ -51,8 +57,6 @@ void __init mem_encrypt_free_decrypted_m
 
 void __init sev_es_init_vc_handling(void);
 
-void __init mem_encrypt_init(void);
-
 #define __bss_decrypted __section(".bss..decrypted")
 
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
@@ -85,8 +89,6 @@ early_set_mem_enc_dec_hypercall(unsigned
 
 static inline void mem_encrypt_free_decrypted_mem(void) { }
 
-static inline void mem_encrypt_init(void) { }
-
 #define __bss_decrypted
 
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */


Patches currently in stable-queue which might be from tglx@xxxxxxxxxxxxx are

queue-6.4/x86-cpu-switch-to-arch_cpu_finalize_init.patch
queue-6.4/arm-cpu-switch-to-arch_cpu_finalize_init.patch
queue-6.4/um-cpu-switch-to-arch_cpu_finalize_init.patch
queue-6.4/mips-cpu-switch-to-arch_cpu_finalize_init.patch
queue-6.4/x86-mem_encrypt-unbreak-the-amd_mem_encrypt-n-build.patch
queue-6.4/init-x86-move-mem_encrypt_init-into-arch_cpu_finalize_init.patch
queue-6.4/sh-cpu-switch-to-arch_cpu_finalize_init.patch
queue-6.4/init-invoke-arch_cpu_finalize_init-earlier.patch
queue-6.4/x86-xen-fix-secondary-processors-fpu-initialization.patch
queue-6.4/x86-fpu-move-fpu-initialization-into-arch_cpu_finalize_init.patch
queue-6.4/loongarch-cpu-switch-to-arch_cpu_finalize_init.patch
queue-6.4/init-remove-check_bugs-leftovers.patch
queue-6.4/init-provide-arch_cpu_finalize_init.patch
queue-6.4/m68k-cpu-switch-to-arch_cpu_finalize_init.patch
queue-6.4/x86-init-initialize-signal-frame-size-late.patch
queue-6.4/sparc-cpu-switch-to-arch_cpu_finalize_init.patch
queue-6.4/x86-fpu-mark-init-functions-__init.patch
queue-6.4/ia64-cpu-switch-to-arch_cpu_finalize_init.patch
queue-6.4/x86-fpu-remove-cpuinfo-argument-from-init-functions.patch