> > > > > Right. Thinking about this, I would be more concerned about the
> > > > > fact that the guest can effectively pin an amount of the host's
> > > > > page cache up to the size of the device/file passed to the guest
> > > > > as PMEM, can't it, Pankaj? Or is there some QEMU magic that
> > > > > avoids this?
> > > >
> > > > Yes, the guest will pin these host page cache pages using
> > > > 'get_user_pages' by elevating the page reference count. But these
> > > > pages can be reclaimed by the host at any time when there is
> > > > memory pressure.
> > >
> > > Wait, how can the guest pin the host pages? I would expect this to
> > > happen only when using vfio and device assignment. Otherwise, no,
> > > the host can't reclaim a pinned page; that's the whole point of a
> > > pin, to prevent the mm from reclaiming ownership.
> >
> > Yes, you are right. I just used the word 'pin', but it does not
> > actually pin the pages permanently. I had gone through the discussion
> > of the existing problems with get_user_pages and DMA, e.g. [1], to
> > understand Jan's POV. It mentions GUP-pinned pages, so I also used
> > the word 'pin'. But the guest does not permanently pin these pages,
> > and they can be reclaimed by the host.
>
> OK, then I was just confused about how virtio-pmem is going to work.
> Thanks for the explanation! So can I imagine this as the guest mmapping
> the host file and providing the mapped range as "NVDIMM pages" to the
> kernel inside the guest? Or is it more complex?

Yes, that's correct. The host QEMU process's virtual address range is
used as the guest physical address range, and a direct mapping (EPT/NPT)
is established. On the guest side, this physical memory range is plugged
into the guest system memory map, and the DAX mapping is set up using
the nvdimm calls. A minimal sketch of this setup follows below.

Thanks,
Pankaj
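
For illustration, a minimal sketch of how this setup looks in practice,
assuming the documented QEMU virtio-pmem options; the file path, sizes,
object/device ids, mount point, and the /dev/pmem0 name inside the guest
are placeholder assumptions, not values from this thread.

Host side: back the device with a regular file, shared, so its pages sit
in the host page cache and are mapped into the QEMU process address
space:

  qemu-system-x86_64 -machine pc,accel=kvm \
      -m 4G,slots=2,maxmem=8G \
      -object memory-backend-file,id=mem1,share=on,mem-path=/var/lib/vpmem/pmem.img,size=2G \
      -device virtio-pmem-pci,memdev=mem1,id=nv1
      # ... plus the usual disk, network, etc. options

Guest side: the range shows up as a pmem region that can carry a
DAX-capable filesystem:

  ndctl list -N            # shows the virtio-pmem backed namespace
  mkfs.ext4 /dev/pmem0     # assumed device name in the guest
  mount -o dax /dev/pmem0 /mnt

Because share=on maps the backing file through the host page cache, the
pages the guest accesses are exactly the host page cache pages the
reclaim discussion above is about.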