Re: [virtio-dev] [PATCH 2/2] drm/virtio: add iommu support.

I tested this with Gerd's QEMU-side fix in the sirius/virtio-gpu-iommu
branch: https://git.kraxel.org/cgit/qemu/commit/?h=sirius/virtio-gpu-iommu&id=86f9d30e4a44ca47f007e51ab5744f87e79fb83e

This resolves the issue so that the iommu_platform attribute can be specified
for the virtio-gpu device in QEMU and the virtqueue vring goes through the
DMA API's swiotlb bounce buffer, which is what we want for AMD SEV since the
bounce buffer is set as decrypted.  However, I still observe garbage on the
guest graphical display under AMD SEV.  The guest framebuffer ttm->pages go
through the DMA map API in this patch, but it looks like the other path,
where virtio_gpufb_create() calls virtio_gpu_vmap_fb() after
virtio_gpu_alloc_object() has created the ttm->pages, does not.

virtio_gpu_vmap_fb() calls down into ttm_bo_kmap() and eventually vmap() to
get a guest virtual address.  It is on the PTEs of this vmap mapping that I
need to call set_memory_decrypted() to get the graphical display working in
the guest without garbage (see the sketch after the call traces below).

virtio_gpu_alloc_object()
    virtio_gpu_object_create()
        struct virtio_gpu_object bo kzalloc()
            ttm_bo_init()
            ...
                ttm_tt_create()
                    bo->ttm = bdev->driver->ttm_tt_create()
                        virtio_gpu_ttm_tt_create()
                        ...
                        ttm_tt_populate()
                            ttm_pool_populate()
                                ttm_get_pages(ttm->pages, ttm->num_pages)

virtio_gpu_vmap_fb()
    virtio_gpu_object_kmap(obj, NULL)
       ttm_bo_kmap
          ttm_mem_io_reserve()
             ttm_bo_kmap_ttm()
                vmap()
              struct virtio_gpu_object bo->vmap = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
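
Here is a minimal sketch of that first spot fix, applied where
virtio_gpu_vmap_fb() obtains the vmap address.  This is hypothetical code
rather than my exact diff: it assumes set_memory_decrypted() (from
<linux/set_memory.h>) can be applied to the vmap'ed VA, and that the object
size is reachable via gem_base.size.

#include <linux/set_memory.h>

static int virtio_gpu_vmap_fb(struct virtio_gpu_device *vgdev,
                              struct virtio_gpu_object *obj)
{
        int ret;

        ret = virtio_gpu_object_kmap(obj, NULL);
        if (ret)
                return ret;

        /* Clear the SEV C-bit in the PTEs of the fresh vmap mapping so
         * the host can read what the guest renders into it. */
        return set_memory_decrypted((unsigned long)obj->vmap,
                                    obj->gem_base.size >> PAGE_SHIFT);
}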


There is a separate userspace path, for example when the display manager
kicks in: virtio_gpu_alloc_object() followed by virtio_gpu_object_attach()
are called through the ioctl virtio_gpu_resource_create_ioctl().  The
mapping of the pages created in this path is handled in the do_page_fault()
path, in ttm_bo_vm_ops's ttm_bo_vm_fault() handler.

do_page_fault()
    __do_page_fault()
       handle_mm_fault()
          ttm_bo_vm_fault()
             ttm_bo_reserve()
                __ttm_bo_reserve()
             ttm_mem_io_reserve_vm()
                ttm_mem_io_reserve()
                   bdev->driver->io_mem_reserve()
                      virtio_gpu_ttm_io_mem_reserve()
                         mem->bus.is_iomem = false
                         mem->mem_type == TTM_PL_TT

The prot for the page mapping here also needs a fix to mark it as decrypted;
roughly like the sketch below.
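
Something along these lines inside ttm_bo_vm_fault(), where the fault prot
is computed (again a rough sketch, not my exact diff; it assumes
pgprot_decrypted() is the right helper to clear the encryption bit, and cvma
is the local VMA copy TTM uses there):

        cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
                                        cvma.vm_page_prot);
        if (bo->mem.mem_type == TTM_PL_TT && !bo->mem.bus.is_iomem)
                /* Assumption: pgprot_decrypted() clears the SEV C-bit so
                 * the userspace PTEs match the decrypted ttm->pages. */
                cvma.vm_page_prot = pgprot_decrypted(cvma.vm_page_prot);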

With these two spot fixes on top of this patch and the QEMU-side fix, I am
able to boot a guest with a working graphical display under AMD SEV, without
garbage on the display.

I attempted to fix this in the TTM layer; here is the discussion:
https://lore.kernel.org/lkml/b44280d7-eb13-0996-71f5-3fbdeb466801@xxxxxxx/

The TTM maintainer Christian suggests mapping and setting ttm->pages as
decrypted right after ttm->pages are allocated; a sketch of that approach
follows.
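
That would look roughly like the hypothetical helper below, called right
after ttm_get_pages() fills ttm->pages (the name, placement, and exact API
are assumptions on my side, not anything the TTM code has today):

#include <linux/set_memory.h>

/* Hypothetical helper: flip freshly allocated ttm->pages to decrypted
 * through their direct-map addresses. */
static int ttm_tt_set_pages_decrypted(struct ttm_tt *ttm)
{
        unsigned int i;
        int ret;

        for (i = 0; i < ttm->num_pages; i++) {
                ret = set_memory_decrypted(
                        (unsigned long)page_address(ttm->pages[i]), 1);
                if (ret)
                        return ret;
        }
        return 0;
}

The pages would of course have to be flipped back with set_memory_encrypted()
before they are returned to the pool, so the allocator never hands decrypted
pages to unrelated users.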

Just checking with you guys: maybe there is a better way to handle this in
the virtio-gpu layer instead of the common TTM layer.  Could you shed some
light on this?  Thanks.

- Jiandi

On 09/03/2018 06:50 PM, Dave Airlie wrote:
> For the series,
> 
> Reviewed-by: Dave Airlie <airlied@xxxxxxxxxx>
> On Wed, 29 Aug 2018 at 22:20, Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>>
>> Use the dma mapping api and properly add iommu mappings for
>> objects, unless virtio is in iommu quirk mode.
>>
>> Signed-off-by: Gerd Hoffmann <kraxel@xxxxxxxxxx>
>> ---
>>  drivers/gpu/drm/virtio/virtgpu_drv.h |  1 +
>>  drivers/gpu/drm/virtio/virtgpu_vq.c  | 46 +++++++++++++++++++++++++++++-------
>>  2 files changed, 38 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
>> index cbbff01077..ec9a38f995 100644
>> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
>> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
>> @@ -57,6 +57,7 @@ struct virtio_gpu_object {
>>         uint32_t hw_res_handle;
>>
>>         struct sg_table *pages;
>> +       uint32_t mapped;
>>         void *vmap;
>>         bool dumb;
>>         struct ttm_place                placement_code;
>> diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
>> index af24e91267..bf631d32d4 100644
>> --- a/drivers/gpu/drm/virtio/virtgpu_vq.c
>> +++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
>> @@ -424,7 +424,8 @@ void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
>>  }
>>
>>  static void virtio_gpu_cmd_resource_inval_backing(struct virtio_gpu_device *vgdev,
>> -                                                 uint32_t resource_id)
>> +                                                 uint32_t resource_id,
>> +                                                 struct virtio_gpu_fence **fence)
>>  {
>>         struct virtio_gpu_resource_detach_backing *cmd_p;
>>         struct virtio_gpu_vbuffer *vbuf;
>> @@ -435,7 +436,7 @@ static void virtio_gpu_cmd_resource_inval_backing(struct virtio_gpu_device *vgde
>>         cmd_p->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING);
>>         cmd_p->resource_id = cpu_to_le32(resource_id);
>>
>> -       virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
>> +       virtio_gpu_queue_fenced_ctrl_buffer(vgdev, vbuf, &cmd_p->hdr, fence);
>>  }
>>
>>  void virtio_gpu_cmd_set_scanout(struct virtio_gpu_device *vgdev,
>> @@ -848,9 +849,10 @@ int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
>>                              uint32_t resource_id,
>>                              struct virtio_gpu_fence **fence)
>>  {
>> +       bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
>>         struct virtio_gpu_mem_entry *ents;
>>         struct scatterlist *sg;
>> -       int si;
>> +       int si, nents;
>>
>>         if (!obj->pages) {
>>                 int ret;
>> @@ -860,23 +862,33 @@ int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
>>                         return ret;
>>         }
>>
>> +       if (use_dma_api) {
>> +               obj->mapped = dma_map_sg(vgdev->vdev->dev.parent,
>> +                                        obj->pages->sgl, obj->pages->nents,
>> +                                        DMA_TO_DEVICE);
>> +               nents = obj->mapped;
>> +       } else {
>> +               nents = obj->pages->nents;
>> +       }
>> +
>>         /* gets freed when the ring has consumed it */
>> -       ents = kmalloc_array(obj->pages->nents,
>> -                            sizeof(struct virtio_gpu_mem_entry),
>> +       ents = kmalloc_array(nents, sizeof(struct virtio_gpu_mem_entry),
>>                              GFP_KERNEL);
>>         if (!ents) {
>>                 DRM_ERROR("failed to allocate ent list\n");
>>                 return -ENOMEM;
>>         }
>>
>> -       for_each_sg(obj->pages->sgl, sg, obj->pages->nents, si) {
>> -               ents[si].addr = cpu_to_le64(sg_phys(sg));
>> +       for_each_sg(obj->pages->sgl, sg, nents, si) {
>> +               ents[si].addr = cpu_to_le64(use_dma_api
>> +                                           ? sg_dma_address(sg)
>> +                                           : sg_phys(sg));
>>                 ents[si].length = cpu_to_le32(sg->length);
>>                 ents[si].padding = 0;
>>         }
>>
>>         virtio_gpu_cmd_resource_attach_backing(vgdev, resource_id,
>> -                                              ents, obj->pages->nents,
>> +                                              ents, nents,
>>                                                fence);
>>         obj->hw_res_handle = resource_id;
>>         return 0;
>> @@ -885,7 +897,23 @@ int virtio_gpu_object_attach(struct virtio_gpu_device *vgdev,
>>  void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
>>                               struct virtio_gpu_object *obj)
>>  {
>> -       virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle);
>> +       bool use_dma_api = !virtio_has_iommu_quirk(vgdev->vdev);
>> +       struct virtio_gpu_fence *fence;
>> +
>> +       if (use_dma_api && obj->mapped) {
>> +               /* detach backing and wait for the host to process it ... */
>> +               virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, &fence);
>> +               dma_fence_wait(&fence->f, true);
>> +               dma_fence_put(&fence->f);
>> +
>> +               /* ... then tear down iommu mappings */
>> +               dma_unmap_sg(vgdev->vdev->dev.parent,
>> +                            obj->pages->sgl, obj->mapped,
>> +                            DMA_TO_DEVICE);
>> +               obj->mapped = 0;
>> +       } else {
>> +               virtio_gpu_cmd_resource_inval_backing(vgdev, obj->hw_res_handle, NULL);
>> +       }
>>  }
>>
>>  void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
>> --
>> 2.9.3
>>
>>