On Mon, Oct 12, 2020 at 12:49 PM Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
> Quoting Daniel Vetter (2020-10-09 17:16:06)
> > On Fri, Oct 9, 2020 at 12:21 PM Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
> > >
> > > vgem is a minimalistic driver that provides shmemfs objects to
> > > userspace that may then be used as an in-memory surface and transported
> > > across dma-buf to other drivers. Since its introduction,
> > > drm_gem_shmem_helper now provides the same shmemfs facilities and so we
> > > can trim vgem to wrap the helper.
> > >
> > > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > > ---
> > >  drivers/gpu/drm/Kconfig         |   1 +
> > >  drivers/gpu/drm/vgem/vgem_drv.c | 281 ++------------------------------
> > >  drivers/gpu/drm/vgem/vgem_drv.h |  11 --
> > >  3 files changed, 13 insertions(+), 280 deletions(-)
> >
> > Nice diffstat :-)
> >
> > Reviewed-by: Daniel Vetter <daniel.vetter@xxxxxxxx>
>
> Unfortunately I had to drop the drm_gem_prime_mmap() since the existing
> expectation is that we hand the fault handler off to shmemfs so we can
> release the module while the memory is exported.

That sounds like a broken igt. Once we have refcounting for outstanding
dma_fence/buf or anything else, we'll block unloading of the module (not
unbinding of the driver). Which one is that?

> The other issue happens
> to be for arch/x86, where just setting PAT=WC on the PTE does not flush
> the cache for that page, and the CPU will preferentially use the cache.
> That has caught us out more than once.

Ah, the old disappointment around WC and the DMA API on x86, I guess :-/
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx