Hi Hugh,

> 
> On Mon, 24 Jul 2023, Kasireddy, Vivek wrote:
> > Hi Jason,
> > > On Mon, Jul 24, 2023 at 07:54:38AM +0000, Kasireddy, Vivek wrote:
> > > 
> > > > > I'm not at all familiar with the udmabuf use case but that sounds
> > > > > brittle and effectively makes this notifier udmabuf specific right?
> > > > Oh, Qemu uses the udmabuf driver to provide Host Graphics components
> > > > (such as Spice, Gstreamer, UI, etc) zero-copy access to Guest created
> > > > buffers. In other words, from a core mm standpoint, udmabuf just
> > > > collects a bunch of pages (associated with buffers) scattered inside
> > > > the memfd (Guest ram backed by shmem or hugetlbfs) and wraps
> > > > them in a dmabuf fd. And, since we provide zero-copy access, we
> > > > use DMA fences to ensure that the components on the Host and
> > > > Guest do not access the buffer simultaneously.
> > > 
> > > So why do you need to track updates proactively like this?
> > As David noted in the earlier series, if Qemu punches a hole in its memfd
> > that goes through pages that are registered against a udmabuf fd, then
> > udmabuf needs to update its list with new pages when the hole gets
> > filled after (guest) writes. Otherwise, we'd run into the coherency
> > problem (between udmabuf and memfd) as demonstrated in the selftest
> > (patch #3 in this series).
> 
> Wouldn't this all be very much better if Qemu stopped punching holes there?
I think holes can be punched anywhere in the memfd for various reasons. Some
of the use-cases where this would be done were identified by David. Here is
what he said in an earlier discussion:
"There are *probably* more issues on the QEMU side when udmabuf is paired
with things like MADV_DONTNEED/FALLOC_FL_PUNCH_HOLE used for
virtio-balloon, virtio-mem, postcopy live migration, ... for example, in"

Thanks,
Vivek

> 
> Hugh