On 6/21/2017 2:16 AM, Thomas Gleixner wrote:
On Fri, 16 Jun 2017, Tom Lendacky wrote:
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index a105796..988b336 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -15,16 +15,24 @@
#ifndef __ASSEMBLY__
+#include <linux/init.h>
+
#ifdef CONFIG_AMD_MEM_ENCRYPT
extern unsigned long sme_me_mask;
+void __init sme_enable(void);
+
#else /* !CONFIG_AMD_MEM_ENCRYPT */
#define sme_me_mask 0UL
+static inline void __init sme_enable(void) { }
+
#endif /* CONFIG_AMD_MEM_ENCRYPT */
+unsigned long sme_get_me_mask(void);
Why is this an unconditional function? Isn't the mask simply 0 when the MEM
ENCRYPT support is disabled?
I made it unconditional because of the call from head_64.S. I can't make
use of the C-level static inline function there, and since the mask is
not a variable when CONFIG_AMD_MEM_ENCRYPT is not configured (it's
#defined to 0), I can't reference the variable directly.
I could create a #define in head_64.S that loads %rax with the variable
if CONFIG_AMD_MEM_ENCRYPT is configured, or with zero if it's not, or
add an #ifdef directly at that point in the code. Thoughts on that?
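
For illustration, the #ifdef variant in the secondary_startup_64 path
might look something like this (just a sketch of the idea, assuming the
sme_me_mask variable is directly accessible to the assembly code at that
point; not a tested change):

#ifdef CONFIG_AMD_MEM_ENCRYPT
	movq	sme_me_mask(%rip), %rax	/* encryption mask from the C variable */
#else
	xorq	%rax, %rax		/* mask is 0 when SME is not built in */
#endif
	addq	$(init_top_pgt - __START_KERNEL_map), %rax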
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 6225550..ef12729 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -78,7 +78,29 @@ startup_64:
call __startup_64
popq %rsi
- movq $(early_top_pgt - __START_KERNEL_map), %rax
+ /*
+ * Encrypt the kernel if SME is active.
+ * The real_mode_data address is in %rsi and that register can be
+ * clobbered by the called function so be sure to save it.
+ */
+ push %rsi
+ call sme_encrypt_kernel
+ pop %rsi
That does not make any sense. Neither the call to sme_encrypt_kernel() nor
the following call to sme_get_me_mask().
__startup_64() is already C code, so why can't you simply call that from
__startup_64() in C and return the mask from there?
I was trying to keep it explicit as to what was happening, but I can
move those calls into __startup_64(). I'll still need the call to
sme_get_me_mask() in the secondary_startup_64 path, though (depending on
your thoughts on the above response).
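
As a rough sketch of that direction (illustrative only; the actual
__startup_64() signature may differ and the early page table fixup code
is elided here):

	/* arch/x86/kernel/head64.c - illustrative sketch, not the actual patch */
	unsigned long __init __startup_64(unsigned long physaddr)
	{
		/* ... existing early page table fixup code ... */

		/* Encrypt the kernel in place if SME is active. */
		sme_encrypt_kernel();

		/*
		 * Return the encryption mask so the assembly caller can add
		 * it into the page table address before loading %cr3.
		 */
		return sme_get_me_mask();
	}

startup_64 would then just add the returned value (in %rax) to the page
table offset before writing %cr3.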
@@ -98,7 +120,20 @@ ENTRY(secondary_startup_64)
/* Sanitize CPU configuration */
call verify_cpu
- movq $(init_top_pgt - __START_KERNEL_map), %rax
+ /*
+ * Get the SME encryption mask.
+ * The encryption mask will be returned in %rax, so we do an ADD
+ * below to be sure that the encryption mask is part of the
+ * value that will be stored in %cr3.
+ *
+ * The real_mode_data address is in %rsi and that register can be
+ * clobbered by the called function so be sure to save it.
+ */
+ push %rsi
+ call sme_get_me_mask
+ pop %rsi
Do we really need a call here? The mask is established at this point, so
it's either 0 when the encryption stuff is not compiled in, or it can be
retrieved from a variable which is accessible here.
Same as above, this can be updated based on the decided approach.
Thanks,
Tom
+
+ addq $(init_top_pgt - __START_KERNEL_map), %rax
1:
/* Enable PAE mode, PGE and LA57 */
Thanks,
tglx