Rob Clark <robdclark@xxxxxxxxx> writes:

> From: Rob Clark <robdclark@xxxxxxxxxxxx>
>
> Since there is no real device associated with VGEM, it is impossible
> to end up with appropriate dev->dma_ops, meaning that we have no way
> to invalidate the shmem pages allocated by VGEM.  So, at least on
> platforms without drm_clflush_pages(), we end up with corruption when
> cache lines from previous usage of VGEM bo pages get evicted to
> memory.
>
> The only sane option is to use cached mappings.

This may be an improvement, but... pin/unpin happens only on
attaching/closing the dma-buf, right?  So, great, you flushed the
cached map once after exporting the vgem dma-buf to the actual GPU
device, but from then on you still have no interface for getting
coherent access through VGEM's mapping again, which still exists.

I feel like this is papering over something that's really just broken,
and we should stop providing VGEM just because someone wants to write
dma-buf test code without driver-specific BO alloc ioctl code.
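For contrast, here's a rough sketch (my own illustration, nothing vgem
provides) of how userspace is supposed to bracket CPU access through a
dma-buf mmap with DMA_BUF_IOCTL_SYNC so the exporter gets a chance to
flush/invalidate caches.  The names cpu_write_dmabuf/dmabuf_fd/map are
made up for the example; the point is that a cached mapping made through
VGEM's own GEM mmap path never gets a hook like this:

    /* Hypothetical helper, not kernel or vgem code: write to a
     * dma-buf CPU mapping with proper sync bracketing. */
    #include <stddef.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/dma-buf.h>

    static int cpu_write_dmabuf(int dmabuf_fd, void *map, size_t len,
                                const void *src)
    {
        struct dma_buf_sync sync = {
            .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
        };

        /* Tell the exporter CPU access is starting so it can
         * invalidate stale cache lines for this range. */
        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync))
            return -1;

        memcpy(map, src, len);

        /* End of CPU access: exporter can flush dirty lines back
         * before the device touches the buffer again. */
        sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
        return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
    }

With only a one-time flush at attach, anything like the above is
unavailable for the mapping VGEM itself hands out.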