Re: [PATCH v5 1/5] RDMA/umem: Support importing dma-buf as user memory region

On Sat, Oct 17, 2020 at 9:05 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
>
> On Thu, Oct 15, 2020 at 03:02:45PM -0700, Jianxin Xiong wrote:
>
> > +static void ib_umem_dmabuf_invalidate_cb(struct dma_buf_attachment *attach)
> > +{
> > +     struct ib_umem_dmabuf *umem_dmabuf = attach->importer_priv;
> > +
> > +     dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
> > +
> > +     ib_umem_dmabuf_unmap_pages(&umem_dmabuf->umem, true);
> > +     queue_work(ib_wq, &umem_dmabuf->work);
>
> Do we really want to queue remapping or should it wait until there is
> a page fault?
>
> What do GPUs do?

Atm there are no upstream GPU drivers that use buffer-based memory
management and also support page faults in the hardware. So we have to
pull the entire thing in anyway and use the dma_fence machinery to
track what's busy.
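
As a rough illustration, here's a minimal sketch of that importer-side
fence tracking. struct my_umem, its fields and my_begin_dma() are all
invented for the example; only the dma_resv calls are real API. The
importer publishes a fence on the buffer's reservation object before
starting DMA, so the exporter's move/eviction path waits until the
device is idle:

#include <linux/dma-buf.h>
#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

/* Hypothetical importer state, for illustration only. */
struct my_umem {
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;
	bool mapped;
};

/*
 * Publish a fence for an in-flight DMA operation so the exporter
 * won't move the buffer underneath us; "done" signals once the
 * device has finished with the current mapping.
 */
static int my_begin_dma(struct my_umem *umem, struct dma_fence *done)
{
	struct dma_resv *resv = umem->attach->dmabuf->resv;
	int ret;

	dma_resv_lock(resv, NULL);
	ret = dma_resv_reserve_shared(resv, 1);
	if (!ret)
		dma_resv_add_shared_fence(resv, done);
	dma_resv_unlock(resv);
	return ret;
}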

For faulting hardware I'd wait until the first page fault and then map
in the entire range again (you get the entire thing anyway). Since the
move_notify happened because the buffer is moving, you'll end up
stalling either way. Plus, if you prefault right away, you need some
thrashing limiter to avoid doing that when you immediately get another
move_notify. As a first step I'd do the same thing you do for mmu
notifier ranges, since the situation is quite similar.
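
A rough sketch of that fault-driven variant, reusing the hypothetical
struct my_umem and includes from the sketch above (again, only the
dma-buf/dma_resv calls are real API): move_notify just tears the
mapping down, and the device page-fault handler remaps the whole range
under the reservation lock:

static void my_invalidate_cb(struct dma_buf_attachment *attach)
{
	struct my_umem *umem = attach->importer_priv;

	dma_resv_assert_held(attach->dmabuf->resv);

	/* Drop the device mapping; no eager remap is queued here. */
	if (umem->mapped) {
		dma_buf_unmap_attachment(umem->attach, umem->sgt,
					 DMA_BIDIRECTIONAL);
		umem->mapped = false;
	}
}

/* Device page-fault handler: lazily map the entire range again. */
static int my_handle_fault(struct my_umem *umem)
{
	struct dma_buf *dmabuf = umem->attach->dmabuf;
	struct sg_table *sgt;
	int ret = 0;

	dma_resv_lock(dmabuf->resv, NULL);
	if (!umem->mapped) {
		sgt = dma_buf_map_attachment(umem->attach,
					     DMA_BIDIRECTIONAL);
		if (IS_ERR(sgt)) {
			ret = PTR_ERR(sgt);
		} else {
			umem->sgt = sgt;
			umem->mapped = true;
		}
	}
	dma_resv_unlock(dmabuf->resv);
	return ret;
}
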
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch