Re: [RFC PATCH] drm/ttm, drm/vmwgfx: Have TTM support AMD SEV encryption

On 28.05.19 at 16:48, Lendacky, Thomas wrote:
On 5/28/19 2:31 AM, Thomas Hellstrom wrote:
Hi, Tom,

Could you shed some light on this?
I don't have a lot of GPU knowledge, so let me start with an overview of
how everything should work and see if that answers the questions being
asked.

First, SME:
The encryption bit is bit 47 of the physical address. So, if a device does
not support at least 48-bit DMA, it will have to use SWIOTLB and
bounce-buffer the data. This is handled automatically if the driver is
using the Linux DMA API, as all of SWIOTLB has been marked un-encrypted.
Data is bounced between the un-encrypted SWIOTLB and the (presumably)
encrypted area of the driver.
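In other words, a driver that sticks to the streaming DMA API never sees the
bouncing. A minimal sketch of that usage (the function name is made up for
illustration, not taken from any driver in this thread):

#include <linux/dma-mapping.h>

/*
 * Hypothetical example: map a driver buffer for device access. If SME is
 * active and the device's DMA mask cannot express the encryption bit, the
 * DMA core transparently bounces the data through the un-encrypted SWIOTLB;
 * the driver only ever sees the returned dma_addr_t.
 */
static int example_map_for_device(struct device *dev, void *buf, size_t len,
				  dma_addr_t *dma)
{
	*dma = dma_map_single(dev, buf, len, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, *dma))
		return -ENOMEM;
	return 0;
}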

Ok, that explains why we don't need to manually handle the encryption bit in TTM.

For SEV:
The encryption bit position is the same as for SME. However, with SEV all
DMA must use an un-encrypted area, so all DMA goes through SWIOTLB. Just
like SME, this is handled automatically if the driver is using the Linux
DMA API, as all of SWIOTLB has been marked un-encrypted. And just like SME,
data is bounced between the un-encrypted SWIOTLB and the (presumably)
encrypted area of the driver.

There is an optimization for dma_alloc_coherent() where the pages are
allocated and marked un-encrypted, thus avoiding the bouncing (see file
kernel/dma/direct.c, dma_direct_alloc_pages()).
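From the driver's point of view, that optimization just means the usual
coherent allocation already returns memory the device can reach without
bouncing. A hedged usage sketch (the helper name is invented):

#include <linux/dma-mapping.h>

/*
 * Hypothetical example: with SEV active, the buffer returned here has
 * already been marked un-encrypted by the DMA core, so the device can
 * DMA to/from it directly and no SWIOTLB bouncing is involved.
 */
static void *example_get_dma_buffer(struct device *dev, size_t size,
				    dma_addr_t *dma_handle)
{
	return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
}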

And that explains why we have to use dma_alloc_coherent()....

As for kernel vmaps and user-maps, those pages will be marked encrypted
(unless explicitly made un-encrypted by calling set_memory_decrypted()).
But if you are copying to/from those areas into the un-encrypted DMA
area, then everything will be OK.
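For completeness, a minimal sketch of the explicit route mentioned above,
for the case where a driver really does want its own pages to be device
accessible without bouncing (the helper names are made up):

#include <linux/set_memory.h>
#include <linux/bug.h>

/*
 * Hypothetical helpers: clear the encryption bit on nr_pages starting at
 * vaddr so a device can access them directly, and set it again before the
 * pages go back to the page allocator.
 */
static int example_share_with_device(void *vaddr, int nr_pages)
{
	return set_memory_decrypted((unsigned long)vaddr, nr_pages);
}

static void example_unshare_from_device(void *vaddr, int nr_pages)
{
	WARN_ON(set_memory_encrypted((unsigned long)vaddr, nr_pages));
}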

Things get fuzzy for me when it comes to the GPU's access to the memory:
what is accessed, and how.

I can fill in those loose ends, but it's probably going to be a long mail and a bit off topic.

Thanks for filling in how that stuff actually works. And yeah, as far as I can see we actually don't need to do anything.

The only way to get un-encrypted pages which don't bounce through SWIOTLB is using dma_alloc_coherent().

It's probably a bit unfortunate that TTM can't control it, but I think Thomas has to live with that. The use case for sharing encrypted pages is probably not so common anyway.
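To make that concrete, a hedged sketch of what the driver-side switch could
look like with the TTM page-pool helpers that exist today (illustrative
only, not the actual amdgpu or vmwgfx code):

#include <linux/mem_encrypt.h>
#include <drm/ttm/ttm_page_alloc.h>

/*
 * Illustrative only: back a TTM object with pages from the coherent (DMA)
 * page pool when SEV is active, and from the normal page pool otherwise.
 */
static int example_tt_populate(struct ttm_dma_tt *tt, struct device *dev,
			       struct ttm_operation_ctx *ctx)
{
	if (sev_active())
		return ttm_dma_populate(tt, dev, ctx);
	return ttm_pool_populate(&tt->ttm, ctx);
}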

Thanks,
Christian.


Thanks,
Tom

Thanks,
Thomas


On 5/24/19 5:08 PM, Alex Deucher wrote:
+ Tom

He's been looking into SEV as well.

On Fri, May 24, 2019 at 8:30 AM Thomas Hellstrom <thomas@xxxxxxxxxxxx>
wrote:
On 5/24/19 2:03 PM, Koenig, Christian wrote:
On 24.05.19 at 12:37, Thomas Hellstrom wrote:

On 5/24/19 12:18 PM, Koenig, Christian wrote:
On 24.05.19 at 11:55, Thomas Hellstrom wrote:

On 5/24/19 11:11 AM, Thomas Hellstrom wrote:
Hi, Christian,

On 5/24/19 10:37 AM, Koenig, Christian wrote:
On 24.05.19 at 10:11, Thomas Hellström (VMware) wrote:

From: Thomas Hellstrom <thellstrom@xxxxxxxxxx>

With SEV encryption, all DMA memory must be marked decrypted (AKA "shared")
for devices to be able to read it. In the future we might want to be able
to switch normal (encrypted) memory to decrypted in exactly the same way as
we handle caching states, and that would require additional memory pools.
But for now, rely on memory allocated with dma_alloc_coherent(), which is
already decrypted with SEV enabled. Set up the page protection accordingly.
Drivers must detect SEV enabled and switch to the dma page pool.

This patch has not yet been tested. As a follow-up, we might want to cache
decrypted pages in the dma page pool regardless of their caching state.
This patch is unnecessary; SEV support already works fine with at least
amdgpu, and I would expect that it also works with other drivers as well.

Also see this patch:

commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
Author: Christian König <christian.koenig@xxxxxxx>
Date:   Wed Mar 13 10:11:19 2019 +0100

    drm: fallback to dma_alloc_coherent when memory encryption is active

    We can't just map any random page we get when memory encryption is
    active.

    Signed-off-by: Christian König <christian.koenig@xxxxxxx>
    Acked-by: Alex Deucher <alexander.deucher@xxxxxxx>
    Link: https://patchwork.kernel.org/patch/10850833/

Regards,
Christian.
Yes, I noticed that. Although I fail to see where we automagically
clear the PTE encrypted bit when mapping coherent memory? For the
linear kernel map, that's done within dma_alloc_coherent(), but for
kernel vmaps and user-space maps? Is that done automatically by
the x86 platform layer?
Yes, I think so. Haven't looked too closely at this either.
This sounds a bit odd. If that were the case, the natural place would be
the PAT tracking code, but it only handles caching flags AFAICT. Not
encryption flags.
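For reference, x86 does export a helper for exactly that; a minimal
illustration (not taken from the patch) of clearing the encryption flag in
a mapping's page protection:

#include <asm/pgtable.h>

/*
 * Illustrative only: strip the SME/SEV C-bit from a pgprot before it is
 * used for a kernel vmap or a user-space mapping of decrypted DMA memory,
 * analogous to how pgprot_noncached() adjusts the caching flags.
 */
static pgprot_t example_decrypted_prot(pgprot_t prot)
{
	return pgprot_decrypted(prot);
}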

But when you tested AMD with SEV, was that running as hypervisor rather
than a guest, or did you run an SEV guest with PCI passthrough to the
AMD device?
Yeah, well the problem is we never tested this ourselves :)

/Thomas

And, as a follow up question, why do we need dma_alloc_coherent() when
using SME? I thought the hardware performs the decryption when DMA-ing
to / from an encrypted page with SME, but not with SEV?
I think the issue was that the DMA API would try to use a bounce buffer
in this case.
SEV forces SWIOTLB bouncing on, but not SME. So it should probably be
possible to avoid dma_alloc_coherent() in the SME case.
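For what it's worth, a minimal sketch of how a driver could tell the two
cases apart with the helpers available at the time of this thread (purely
illustrative, not from any posted patch):

#include <linux/mem_encrypt.h>

/*
 * Illustrative only: decide whether coherent (decrypted) allocations are
 * mandatory. Under SEV every DMA target must be un-encrypted, while under
 * SME alone streaming DMA works as long as the device can address the
 * encryption bit.
 */
static bool example_must_use_coherent(void)
{
	if (sev_active())
		return true;	/* SEV guest: all DMA must be un-encrypted */
	if (sme_active())
		return false;	/* SME host: SWIOTLB bounces only when needed */
	return false;
}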
In this case I don't have an explanation for this.

For the background, what happened is that we got reports that SEV/SME
doesn't work with amdgpu. So we told people to try using the
dma_alloc_coherent() path and that worked fine. Because of this we came
up with the patch I noted earlier.

I can confirm that it indeed works now for a couple of users, but we
still don't have a test system for this in our team.

Christian.
OK, understood.

But unless there is some strange magic going on (which there might be,
of course), I do think the patch I sent is correct, and the reason that
SEV works is that the AMD card is used by the hypervisor and not the
guest, and TTM is actually incorrectly creating conflicting maps and
treating the coherent memory as encrypted. But since the memory is only
accessed through encrypted PTEs, the hardware does the right thing,
using the hypervisor key for decryption...

But that's only a guess, and this is not super-urgent. I will be able to
follow up if / when we bring vmwgfx up for SEV.

/Thomas

/Thomas


Christian.

Thanks, Thomas



_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
