Re: [PATCH v10 06/50] x86/sev: Add the host SEV-SNP initialization support

On 11/7/23 10:31, Borislav Petkov wrote:
On Mon, Oct 16, 2023 at 08:27:35AM -0500, Michael Roth wrote:
+static bool early_rmptable_check(void)
+{
+	u64 rmp_base, rmp_size;
+
+	/*
+	 * For early BSP initialization, max_pfn won't be set up yet, wait until
+	 * it is set before performing the RMP table calculations.
+	 */
+	if (!max_pfn)
+		return true;

This already says that this is called at the wrong point during init.

(Just addressing some of your comments, Mike to address others)

I commented earlier that this check can be removed, at which point the function becomes purely a check for whether the RMP table has been pre-allocated by the BIOS. The check is done early in order to allow AutoIBRS to be used as a Spectre mitigation: if the BIOS has not allocated an RMP table, then the SNP feature can be cleared, allowing AutoIBRS to be used, if available.


Right now we have

early_identify_cpu -> early_init_amd -> early_detect_mem_encrypt

which runs only on the BSP but then early_init_amd() is called in
init_amd() too so that it takes care of the APs too.

Which ends up doing a lot of unnecessary work on each AP in
early_detect_mem_encrypt() like calculating the RMP size on each AP
unnecessarily where this needs to happen exactly once.

Is there any reason why this function cannot be moved to init_amd()
where it'll do the normal, per-AP init?

It needs to be called early enough that AutoIBRS is not disabled merely because SNP is supported. By calling it where it is currently called, the SNP feature can be cleared when SNP, although supported, can't actually be used, which in turn allows AutoIBRS to be selected as a more performant Spectre mitigation.


And the stuff that needs to happen once, needs to be called once too.

+
+	return snp_get_rmptable_info(&rmp_base, &rmp_size);
+}
+
  static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
  {
  	u64 msr;
@@ -659,6 +674,9 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
  		if (!(msr & MSR_K7_HWCR_SMMLOCK))
  			goto clear_sev;
+		if (cpu_has(c, X86_FEATURE_SEV_SNP) && !early_rmptable_check())
+			goto clear_snp;
+
  		return;
clear_all:
@@ -666,6 +684,7 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
  clear_sev:
  		setup_clear_cpu_cap(X86_FEATURE_SEV);
  		setup_clear_cpu_cap(X86_FEATURE_SEV_ES);
+clear_snp:
  		setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
  	}
  }

...

+bool snp_get_rmptable_info(u64 *start, u64 *len)
+{
+	u64 max_rmp_pfn, calc_rmp_sz, rmp_sz, rmp_base, rmp_end;
+
+	rdmsrl(MSR_AMD64_RMP_BASE, rmp_base);
+	rdmsrl(MSR_AMD64_RMP_END, rmp_end);
+
+	if (!(rmp_base & RMP_ADDR_MASK) || !(rmp_end & RMP_ADDR_MASK)) {
+		pr_err("Memory for the RMP table has not been reserved by BIOS\n");
+		return false;
+	}

If you're masking off bits 0-12 above...

Because the RMP_END MSR, specifically, has a default value of 0x1fff, where bits [12:0] are reserved. So to check whether the MSR has been set to a non-zero end value, those bits are masked off. However, ...


+
+	if (rmp_base > rmp_end) {

... why aren't you using the masked out vars further on?

... the full values can be used once they have been determined to not be zero.


I know, the hw will say, yeah, those bits are 0 but still. IOW, do:

	rmp_base &= RMP_ADDR_MASK;
	rmp_end  &= RMP_ADDR_MASK;

after reading them.

You can't do that for RMP_END, since it will always have bits [12:0] set to one, and you shouldn't clear them once you know that the MSR has truly been set.

Thanks,
Tom





