Re: Runtime Memory Validation in Intel-TDX and AMD-SNP

On 7/20/2021 2:11 AM, Joerg Roedel wrote:

> I am not sure how it is implemented in TDX hardware, but for SNP the
> guest _must_ _not_ double-validate or even double-invalidate memory.


In TDX it just zeroes the data. If you can tolerate the zeroing, it's fine. Of course for most data that's not tolerable, but for kexec (minus the kernel itself) it is.



> What I sent here is actually v2 of my proposal, v1 had a much more lazy
> approach like you are proposing here. But as I learned what can happen
> is this:
>
> 	* Hypervisor maps GPA X to HPA A
> 	* Guest validates GPA X
> 	  Hardware enforces that HPA A always maps to GPA X
> 	* Hypervisor remaps GPA X to HPA B
> 	* Guest lazily re-validates GPA X
> 	  Hardware enforces that HPA B always maps to GPA X
>
> The situation we have now is that host pages A and B are validated for
> the same guest page, and the hypervisor can switch between them at will,
> without the guest being able to notice it.
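The remap scenario above is only possible if the guest re-validates a page it already validated. A guest can detect the attempt by tracking validation state itself and refusing any lazy re-validation. Here is a minimal sketch of that idea (hypothetical names and a fixed-size bitmap, not the actual kernel implementation):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: the guest keeps one bit per guest page frame
 * recording whether that page has already been validated (e.g. via
 * PVALIDATE on SNP).  Refusing to validate a page a second time closes
 * the window in which the hypervisor could have remapped the GPA to a
 * different host page between the two validations. */
#define MAX_GFNS 4096
static uint8_t validated[MAX_GFNS / 8];

static bool is_validated(uint64_t gfn)
{
	return validated[gfn / 8] & (1u << (gfn % 8));
}

/* Returns 0 on success, -1 if the page was already validated, i.e. a
 * double-validation attempt that may be hypervisor-induced. */
static int validate_gpa(uint64_t gfn)
{
	if (gfn >= MAX_GFNS || is_validated(gfn))
		return -1;	/* must not double-validate */
	/* ... issue the real validation instruction here ... */
	validated[gfn / 8] |= 1u << (gfn % 8);
	return 0;
}
```

With tracking like this, the lazy re-validation in step 5 of the scenario fails loudly instead of silently accepting the hypervisor's second mapping.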


I don't believe that's possible on TDX.


> This can open various attack vectors from the hypervisor towards the
> guest, like tricking the guest into a code-path where it accidentally
> reveals its secrets.

Well, things would certainly be easier if you had a purge interface then.

But for the kexec crash case that would only allow attacks against the crash dump, which I assume are not a real security concern. The crash kexec mostly runs in its own memory, which doesn't need this, or is small enough that it can be fully pre-accepted. And for the view of the previous kernel's memory these issues are probably acceptable.

That leaves the non-crash kexec case, but perhaps it is acceptable to just restart the guest in that case instead of creating complicated and fragile new interfaces.


>> If the device filter is active it won't.
> We are not going to prohibit dma_alloc_coherent() in SNP guests just
> because we are too lazy to implement memory re-validation.


dma_alloc_coherent() is of course allowed, just not freeing. Or rather, if you free, you would need a pool there to recycle the memory.

If you have anything that frees coherent DMA frequently, the performance would be terrible anyway, so you should probably avoid that at all costs.

But since pretty much all the current I/O models rely on a small number of static bounce buffers, that's not a problem.
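The "pool to recycle" idea can be sketched like this (a hypothetical illustration with made-up names, not the kernel's actual dma_pool code, and malloc() standing in for the one-time dma_alloc_coherent()): on free, buffers are parked on a free list instead of being returned to the allocator, so no coherent page ever leaves the shared/validated state and nothing needs re-validation at runtime.

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical sketch: coherent buffers are allocated (and validated or
 * shared with the host) once; "freeing" only pushes the buffer onto a
 * free list, so its memory is never handed back to the page allocator
 * and never has to change validation state again. */
#define BUF_SIZE 4096

struct pool_buf {
	struct pool_buf *next;	/* overlays the unused buffer payload */
};

static struct pool_buf *free_list;

static void *pool_alloc(void)
{
	struct pool_buf *b = free_list;

	if (b) {
		free_list = b->next;	/* recycle a previous buffer */
		return b;
	}
	/* Stand-in for dma_alloc_coherent(): allocate-and-validate once. */
	return malloc(BUF_SIZE);
}

static void pool_free(void *p)
{
	struct pool_buf *b = p;	/* never returned to the allocator */

	b->next = free_list;
	free_list = b;
}
```

A second pool_alloc() after a pool_free() hands back the same buffer, which is exactly the recycling behavior that makes frequent alloc/free affordable without ever invalidating a page.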

-Andi




