On 24.05.19 at 12:37, Thomas Hellstrom wrote:
> On 5/24/19 12:18 PM, Koenig, Christian wrote:
>> On 24.05.19 at 11:55, Thomas Hellstrom wrote:
>>> On 5/24/19 11:11 AM, Thomas Hellstrom wrote:
>>>> Hi, Christian,
>>>>
>>>> On 5/24/19 10:37 AM, Koenig, Christian wrote:
>>>>> On 24.05.19 at 10:11, Thomas Hellström (VMware) wrote:
>>>>>> From: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
>>>>>>
>>>>>> With SEV encryption, all DMA memory must be marked decrypted
>>>>>> (AKA "shared") for devices to be able to read it. In the future
>>>>>> we might want to be able to switch normal (encrypted) memory to
>>>>>> decrypted in exactly the same way as we handle caching states,
>>>>>> and that would require additional memory pools. But for now,
>>>>>> rely on memory allocated with dma_alloc_coherent(), which is
>>>>>> already decrypted with SEV enabled. Set up the page protection
>>>>>> accordingly. Drivers must detect SEV enabled and switch to the
>>>>>> dma page pool.
>>>>>>
>>>>>> This patch has not yet been tested. As a follow-up, we might
>>>>>> want to cache decrypted pages in the dma page pool regardless
>>>>>> of their caching state.
>>>>> This patch is unnecessary; SEV support already works fine with at
>>>>> least amdgpu, and I would expect that it also works with other
>>>>> drivers as well.
>>>>>
>>>>> Also see this patch:
>>>>>
>>>>> commit 64e1f830ea5b3516a4256ed1c504a265d7f2a65c
>>>>> Author: Christian König <christian.koenig@xxxxxxx>
>>>>> Date:   Wed Mar 13 10:11:19 2019 +0100
>>>>>
>>>>>     drm: fallback to dma_alloc_coherent when memory encryption is active
>>>>>
>>>>>     We can't just map any random page we get when memory
>>>>>     encryption is active.
>>>>> Signed-off-by: Christian König <christian.koenig@xxxxxxx>
>>>>> Acked-by: Alex Deucher <alexander.deucher@xxxxxxx>
>>>>> Link: https://patchwork.kernel.org/patch/10850833/
>>>>>
>>>>> Regards,
>>>>> Christian.
>>>> Yes, I noticed that. Although I fail to see where we automagically
>>>> clear the PTE encrypted bit when mapping coherent memory? For the
>>>> linear kernel map, that's done within dma_alloc_coherent(), but
>>>> what about kernel vmaps and user-space maps? Is that done
>>>> automatically by the x86 platform layer?
>> Yes, I think so. Haven't looked too closely at this either.
>
> This sounds a bit odd. If that were the case, the natural place would
> be the PAT tracking code, but it only handles caching flags AFAICT,
> not encryption flags.
>
> But when you tested AMD with SEV, was that running as hypervisor
> rather than as a guest, or did you run an SEV guest with PCI
> passthrough to the AMD device?

Yeah, well, the problem is we never tested this ourselves :)

>>>> /Thomas
>>>>
>>> And, as a follow-up question, why do we need dma_alloc_coherent()
>>> when using SME? I thought the hardware performs the decryption when
>>> DMA-ing to / from an encrypted page with SME, but not with SEV?
>> I think the issue was that the DMA API would try to use a bounce
>> buffer in this case.
>
> SEV forces SWIOTLB bouncing on, but not SME. So it should probably be
> possible to avoid dma_alloc_coherent() in the SME case.

In this case I don't have an explanation for this.

For background, what happened is that we got reports that SEV/SME
doesn't work with amdgpu. So we told people to try the
dma_alloc_coherent() path, and that worked fine. Because of this we
came up with the patch I noted earlier.

I can confirm that it indeed works now for a couple of users, but we
still don't have a test system for this in our team.

Christian.

> /Thomas
>
>> Christian.
>>> Thanks,
>>> Thomas

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel