On Thu, Nov 24, 2022 at 06:24:52PM +0100, Daniel Vetter wrote:
> On Thu, Nov 24, 2022 at 11:11:21AM +0000, Robin Murphy wrote:
> > On 2022-11-23 17:28, Daniel Vetter wrote:
> > > This code was added in b65e64f7ccd4 ("drm/cma: Use
> > > dma_mmap_writecombine() to mmap buffer"), but does not explain why
> > > it's needed.
> > >
> > > It should be entirely unnecessary, because remap_pfn_range(), which
> > > is what the various dma_mmap functions are built on top of, already
> > > unconditionally adjusts the vma flags:
> >
> > Not all dma_mmap_*() implementations use remap_pfn_range() though.
> > For instance, on ARM one set of DMA ops uses vm_map_pages(), but
> > AFAICS not all the relevant drivers would set VM_MIXEDMAP to prevent
> > reaching the BUG_ON(vma->vm_flags & VM_PFNMAP) in there.
>
> Uh, a dma_mmap that does not use VM_PFNMAP has pretty good chances of
> being busted, since that allows get_user_pages() on these memory
> allocations. And I'm really not sure that's a bright idea ...
>
> Can you please point me at these dma-ops so that I can try to
> understand what they're trying to do?

I've taken a trip back in time to v3.14, when b65e64f7ccd4 was merged,
and checked the x86, arm and arm64 DMA mapping implementations. They
all call remap_pfn_range() except for arm_iommu_mmap_attrs(), which
uses vm_insert_page().
It was implemented as

	int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
			   struct page *page)
	{
		if (addr < vma->vm_start || addr >= vma->vm_end)
			return -EFAULT;
		if (!page_count(page))
			return -EINVAL;
		if (!(vma->vm_flags & VM_MIXEDMAP)) {
			BUG_ON(down_read_trylock(&vma->vm_mm->mmap_sem));
			BUG_ON(vma->vm_flags & VM_PFNMAP);
			vma->vm_flags |= VM_MIXEDMAP;
		}
		return insert_page(vma, addr, page, vma->vm_page_prot);
	}

I don't recall if I just hit that BUG_ON() back then, or if I actually
understood what I was doing :-)

Today in mainline, arm_iommu_mmap_attrs() uses vm_map_pages(), which
calls vm_insert_page(), and the BUG_ON() is still there.

> > > https://elixir.bootlin.com/linux/v6.1-rc6/source/mm/memory.c#L2518
> > >
> > > More importantly, it does unconditionally set VM_PFNMAP, so
> > > clearing that flag does not make much sense.
> > >
> > > Patch motivated by discussions around enforcing VM_PFNMAP semantics
> > > for all dma-buf users, where Thomas asked why dma helpers will work
> > > with that dma_buf_mmap() contract.
> > >
> > > References: https://lore.kernel.org/dri-devel/5c3c8d4f-2c06-9210-b00a-4d0ff6f6fbb7@xxxxxxx/
> > > Cc: Laurent Pinchart <laurent.pinchart+renesas@xxxxxxxxxxxxxxxx>
> > > Cc: Dave Airlie <airlied@xxxxxxxxxx>
> > > Cc: Maarten Lankhorst <maarten.lankhorst@xxxxxxxxxxxxxxx>
> > > Cc: Maxime Ripard <mripard@xxxxxxxxxx>
> > > Cc: Thomas Zimmermann <tzimmermann@xxxxxxx>
> > > Cc: Sumit Semwal <sumit.semwal@xxxxxxxxxx>
> > > Cc: "Christian König" <christian.koenig@xxxxxxx>
> > > Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
> > > ---
> > >  drivers/gpu/drm/drm_gem_dma_helper.c | 7 ++-----
> > >  1 file changed, 2 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/drm_gem_dma_helper.c b/drivers/gpu/drm/drm_gem_dma_helper.c
> > > index 1e658c448366..637a5cc62457 100644
> > > --- a/drivers/gpu/drm/drm_gem_dma_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_dma_helper.c
> > > @@ -525,13 +525,10 @@ int drm_gem_dma_mmap(struct drm_gem_dma_object *dma_obj, struct vm_area_struct *
> > >  	int ret;
> > >
> > >  	/*
> > > -	 * Clear the VM_PFNMAP flag that was set by drm_gem_mmap(), and set the
> > > -	 * vm_pgoff (used as a fake buffer offset by DRM) to 0 as we want to map
> > > -	 * the whole buffer.
> > > +	 * Set the vm_pgoff (used as a fake buffer offset by DRM) to 0 as we
> > > +	 * want to map the whole buffer.
> > >  	 */
> > >  	vma->vm_pgoff -= drm_vma_node_start(&obj->vma_node);
> > > -	vma->vm_flags &= ~VM_PFNMAP;
> > > -	vma->vm_flags |= VM_DONTEXPAND;
> > >
> > >  	if (dma_obj->map_noncoherent) {
> > >  		vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);

--
Regards,

Laurent Pinchart