Re: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

On Mon, Jul 24, 2023 at 08:32:45PM +0000, Kasireddy, Vivek wrote:
> Hi Jason,
> 
> > 
> > On Mon, Jul 24, 2023 at 07:54:38AM +0000, Kasireddy, Vivek wrote:
> > 
> > > > I'm not at all familiar with the udmabuf use case but that sounds
> > > > brittle and effectively makes this notifier udmabuf specific right?
> > > Oh, Qemu uses the udmabuf driver to provide Host Graphics components
> > > (such as Spice, Gstreamer, UI, etc) zero-copy access to Guest created
> > > buffers. In other words, from a core mm standpoint, udmabuf just
> > > collects a bunch of pages (associated with buffers) scattered inside
> > > the memfd (Guest ram backed by shmem or hugetlbfs) and wraps
> > > them in a dmabuf fd. And, since we provide zero-copy access, we
> > > use DMA fences to ensure that the components on the Host and
> > > Guest do not access the buffer simultaneously.
> > 
> > So why do you need to track updates proactively like this?
> As David noted in the earlier series, if Qemu punches a hole in its memfd
> that goes through pages that are registered against a udmabuf fd, then
> udmabuf needs to update its list with new pages when the hole gets
> filled after (guest) writes. Otherwise, we'd run into the coherency 
> problem (between udmabuf and memfd) as demonstrated in the selftest
> (patch #3 in this series).

Holes punched in a VMA are already tracked by invalidation; you haven't
explained why this case also needs to see the new mapping.

BTW it is very jarring to hear you talk about files when working with
mmu notifiers. MMU notifiers do not track hole punches or memfds; they
track VMAs and PTEs. Punching a hole in an mmapped memfd will
invalidate the covering PTEs.

> > Trigger a move when the backing memory changes and re-acquire it with
> AFAICS, without this patch or adding new change_pte calls, there is no way to
> get notified when a new page is mapped into the backing memory of a memfd
> (backed by shmem or hugetlbfs) which happens after a hole punch followed
> by writes. 

Yes, we have never wanted to do this because it is racy.

If you still need the memory mapped then you re-call hmm_range_fault
and re-obtain it. hmm_range_fault will resolve all the races and you
get new pages.
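
For reference, the usual shape of that pattern (per Documentation/mm/hmm.rst)
is a seqcount-style retry loop around hmm_range_fault(). The sketch below is
illustrative, not udmabuf code: it assumes an mmu_interval_notifier
("notifier") is already registered over the range, and "driver_lock" is a
placeholder for whatever lock the driver's invalidate callback takes.

```
/* Sketch only: retry loop resolving races between faulting and
 * concurrent invalidations, as described in Documentation/mm/hmm.rst. */
again:
	range.notifier_seq = mmu_interval_read_begin(&notifier);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (ret) {
		if (ret == -EBUSY)
			goto again;	/* collided with an invalidation */
		return ret;
	}

	spin_lock(&driver_lock);
	if (mmu_interval_read_retry(&notifier, range.notifier_seq)) {
		/* Range was invalidated while we were faulting; retry. */
		spin_unlock(&driver_lock);
		goto again;
	}
	/* range.hmm_pfns now holds the current pages; they stay valid
	 * until the invalidate callback runs under driver_lock. */
	spin_unlock(&driver_lock);
```

Because the invalidate callback serializes against driver_lock, the pages
obtained this way are always the current ones, including any installed after
a hole punch.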

> We can definitely get notified when a hole is punched via the
> invalidate notifiers though, but as I described earlier this is not very helpful
> for the udmabuf use-case.

I still don't understand why, or what makes udmabuf so special
compared to all the other places tracking VMA changes and using
hmm_range_fault.

Jason



