Hello, Vivek

All patches applied to misc-next with a small modification, thanks!

Note: While verifying move_notify(), I noticed that the AMD/TTM driver
moves the same shared FB GEMs after each framebuffer update when it
renders into the FB, despite the 32GB BAR. This should be rather
inefficient. I'd expect the dma-buf to stay static if there is no need
to evict it. Something to check: how this works with DG2.

Fix: I made this change to the "Import prime buffers" patch after
spotting a possible race condition between move_notify() and freeing
the GEM:

diff --git a/drivers/gpu/drm/virtio/virtgpu_prime.c b/drivers/gpu/drm/virtio/virtgpu_prime.c
index 8644b87d473d..688810d1b611 100644
--- a/drivers/gpu/drm/virtio/virtgpu_prime.c
+++ b/drivers/gpu/drm/virtio/virtgpu_prime.c
@@ -189,13 +189,18 @@ static void virtgpu_dma_buf_free_obj(struct drm_gem_object *obj)
 	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
 	struct virtio_gpu_device *vgdev = obj->dev->dev_private;
 	struct dma_buf_attachment *attach = obj->import_attach;
+	struct dma_resv *resv = attach->dmabuf->resv;
 
 	if (attach) {
+		dma_resv_lock(resv, NULL);
+
 		virtio_gpu_detach_object_fenced(bo);
 
 		if (bo->sgt)
-			dma_buf_unmap_attachment_unlocked(attach, bo->sgt,
-							  DMA_BIDIRECTIONAL);
+			dma_buf_unmap_attachment(attach, bo->sgt,
+						 DMA_BIDIRECTIONAL);
+
+		dma_resv_unlock(resv);
 
 		dma_buf_detach(attach->dmabuf, attach);
 		dma_buf_put(attach->dmabuf);
@@ -268,7 +273,7 @@ static void virtgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
 	struct drm_gem_object *obj = attach->importer_priv;
 	struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
 
-	if (bo->created) {
+	if (bo->created && kref_read(&obj->refcount)) {
 		virtio_gpu_detach_object_fenced(bo);
 
 		if (bo->sgt)

-- 
Best regards,
Dmitry