Re: [PATCH 1/2] drm/shmem: add support for per object dma api operations

On Fri, Jun 12, 2020 at 3:17 AM Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>
> On Fri, Jun 12, 2020 at 11:47:55AM +0200, Thomas Zimmermann wrote:
> > Hi
> >
> > Am 12.06.20 um 03:36 schrieb Gurchetan Singh:
> > > This is useful for the next patch.  Also, should we only unmap the
> > > number of entries that we mapped with the dma-api?
> >
> > It looks like you're moving virtio code into shmem.
>
> Well, not really.
>
> virtio has -- for historical reasons -- the oddity that it may or may
> not need to dma_map resources, depending on device configuration.
> Initially virtio went with "it's just a vm, let's simply operate on
> physical ram addresses".  That shortcut turned out to be a bad idea
> later on, especially with the arrival of iommu emulation support in
> qemu.  But we couldn't just scrap it, for backward compatibility
> reasons.  See virtio_has_iommu_quirk().
>
> This just allows enabling/disabling dma_map, I guess to fix some
> fallout from the recent shmem helper changes

Yes, the main goal is to fix the double free.

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 346cef5ce251..2f7b6cd25a4b 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -78,7 +78,6 @@ void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
                                shmem->mapped = 0;
                        }

-                       sg_free_table(shmem->pages);
                        shmem->pages = NULL;
                        drm_gem_shmem_unpin(&bo->base.base);
                }

Doing only that on its own causes log spam though

[   10.368385] virtio_gpu virtio5: swiotlb buffer is full (sz: 4096 bytes), total 0 (slots), used 0 (slots)
[   10.384957] virtio_gpu virtio5: swiotlb buffer is full (sz: 4096 bytes), total 0 (slots), used 0 (slots)
[   10.389920] virtio_gpu virtio5: swiotlb buffer is full (sz: 4096 bytes), total 0 (slots), used 0 (slots)
[   10.396859] virtio_gpu virtio5: swiotlb buffer is full (sz: 4096 bytes), total 0 (slots), used 0 (slots)
[   10.401954] virtio_gpu virtio5: swiotlb buffer is full (sz: 4096 bytes), total 0 (slots), used 0 (slots)
[   10.406694] virtio_gpu virtio5: swiotlb buffer is full (sz: 8192 bytes), total 0 (slots), used 0 (slots)
[   10.495744] virtio_gpu virtio5: swiotlb buffer is full (sz: 4096 bytes), total 0 (slots), used 0 (slots)

Plus, I just realized the virtio dma ops and the ops used by drm shmem
are different, so virtio would unconditionally have to skip the shmem
path.  Perhaps the best policy is to revert d323bb44e4d2?

> (Gurchetan, that kind of stuff should
> be mentioned in cover letter and commit messages).

Good tip.

>
> I'm not sure virtio actually needs that patch though.  I *think* doing
> the dma_map+dma_unmap unconditionally, but then ignoring the result in
> case we don't need it, should work.  And it shouldn't be a horrible
> performance hit either; in case we don't have a (virtual) iommu in the
> VM, dma mapping is essentially a nop ...
>
> take care,
>   Gerd
>
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel


