On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
>
> If we assign obj->filp, we believe that the created vgem bo is native and
> allow direct operations like mmap() assuming it behaves as backed by a
> shmemfs inode. When imported from a dmabuf, the obj->pages are
> not always meaningful and the shmemfs backing store misleading.
>
> Note, that regular mmap access to a vgem bo is via the dumb buffer API,
> and that rejects attempts to mmap an imported dmabuf,

What do you mean by "regular mmap access" here? It looks like vgem uses
vgem_gem_dumb_map() as its .dumb_map_offset callback, so it doesn't call
drm_gem_dumb_map_offset().

>
> drm_gem_dumb_map_offset():
>         if (obj->import_attach)
>                 return -EINVAL;
>
> So the only route by which we might accidentally allow mmapping of an
> imported buffer is via vgem_prime_mmap(), which checked for
> obj->filp assuming that it would be NULL.
>
> Well it would, had it been updated to use the common
> drm_gem_dumb_map_offset() helper; instead it has
>
> vgem_gem_dumb_map():
>         if (!obj->filp)
>                 return -EINVAL;
>
> falling foul of the same trap as above.
>
> Reported-by: Lepton Wu <ytht.net@xxxxxxxxx>
> Fixes: af33a9190d02 ("drm/vgem: Enable dmabuf import interfaces")
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Lepton Wu <ytht.net@xxxxxxxxx>
> Cc: Daniel Vetter <daniel@xxxxxxxx>
> Cc: Christian König <christian.koenig@xxxxxxx>
> Cc: Thomas Hellström (Intel) <thomas_os@xxxxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx> # v4.13+
> ---
>  drivers/gpu/drm/vgem/vgem_drv.c | 27 +++++++++++++++++----------
>  1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index 909eba43664a..eb3b7cdac941 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -91,7 +91,7 @@ static vm_fault_t vgem_gem_fault(struct vm_fault *vmf)
>  			ret = 0;
>  		}
>  		mutex_unlock(&obj->pages_lock);
> -	if (ret) {
> +	if (ret && obj->base.filp) {
>  		struct page *page;
>
>  		page = shmem_read_mapping_page(
> @@ -157,7 +157,8 @@ static void vgem_postclose(struct drm_device *dev, struct drm_file *file)
>  }
>
>  static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
> -						     unsigned long size)
> +						     struct file *shmem,
> +						     unsigned long size)
>  {
>  	struct drm_vgem_gem_object *obj;
>  	int ret;
> @@ -166,11 +167,8 @@ static struct drm_vgem_gem_object *__vgem_gem_create(struct drm_device *dev,
>  	if (!obj)
>  		return ERR_PTR(-ENOMEM);
>
> -	ret = drm_gem_object_init(dev, &obj->base, roundup(size, PAGE_SIZE));
> -	if (ret) {
> -		kfree(obj);
> -		return ERR_PTR(ret);
> -	}
> +	drm_gem_private_object_init(dev, &obj->base, size);
> +	obj->base.filp = shmem;
>
>  	mutex_init(&obj->pages_lock);
>
> @@ -189,11 +187,20 @@ static struct drm_gem_object *vgem_gem_create(struct drm_device *dev,
>  					      unsigned long size)
>  {
>  	struct drm_vgem_gem_object *obj;
> +	struct file *shmem;
>  	int ret;
>
> -	obj = __vgem_gem_create(dev, size);
> -	if (IS_ERR(obj))
> +	size = roundup(size, PAGE_SIZE);
> +
> +	shmem = shmem_file_setup(DRIVER_NAME, size, VM_NORESERVE);
> +	if (IS_ERR(shmem))
> +		return ERR_CAST(shmem);
> +
> +	obj = __vgem_gem_create(dev, shmem, size);
> +	if (IS_ERR(obj)) {
> +		fput(shmem);
>  		return ERR_CAST(obj);
> +	}
>
>  	ret = drm_gem_handle_create(file, &obj->base, handle);
>  	if (ret) {
> @@ -363,7 +370,7 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
>  	struct drm_vgem_gem_object *obj;
>  	int npages;
>
> -	obj = __vgem_gem_create(dev, attach->dmabuf->size);
> +	obj = __vgem_gem_create(dev, NULL, attach->dmabuf->size);
>  	if (IS_ERR(obj))
>  		return ERR_CAST(obj);
>
> --
> 2.27.0
>
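Side note, for anyone following along: the trap being described can be modelled in a few lines of plain userspace C. This is a deliberately simplified sketch, not kernel code; the struct and function names here are hypothetical stand-ins for struct drm_gem_object, drm_gem_dumb_map_offset() and the pre-patch vgem_gem_dumb_map(). The point is that before this patch, drm_gem_object_init() gave even an imported object a shmemfs filp, so a filp-based check cannot distinguish imported from native buffers.

```c
#include <assert.h>
#include <stddef.h>

#define EINVAL 22

/* Simplified model of a GEM object: only the two fields the
 * competing checks look at. */
struct gem_object {
	void *import_attach;	/* non-NULL when imported from a dma-buf */
	void *filp;		/* shmemfs backing file, if any */
};

/* Models drm_gem_dumb_map_offset(): rejects any imported buffer
 * by looking at the import attachment directly. */
static int core_dumb_map(const struct gem_object *obj)
{
	return obj->import_attach ? -EINVAL : 0;
}

/* Models vgem_gem_dumb_map() before this patch: it only checks filp.
 * Pre-patch, imported objects carried a (misleading) shmemfs filp
 * as well, so this check lets them through. */
static int vgem_dumb_map(const struct gem_object *obj)
{
	return !obj->filp ? -EINVAL : 0;
}
```

An imported object, i.e. one with import_attach set and (pre-patch) filp also set, is rejected by the core-style check but accepted by the filp-based one, which is exactly the "same trap" the commit message refers to.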