On 03.12.20 16:43, Peter Xu wrote:
> On Thu, Dec 03, 2020 at 11:20:02AM +0000, Stefan Hajnoczi wrote:
>> On Wed, Dec 02, 2020 at 10:45:11AM -0500, Peter Xu wrote:
>>> On Wed, Dec 02, 2020 at 02:33:56PM +0000, Stefan Hajnoczi wrote:
>>>> On Wed, Nov 25, 2020 at 10:57:11AM -0500, Peter Xu wrote:
>>>>> On Wed, Nov 25, 2020 at 01:05:25AM +0000, Justin He wrote:
>>>>>>> I'd appreciate if you could explain why vfio needs to dma map some
>>>>>>> PROT_NONE
>>>>>>
>>>>>> Virtiofs will map a PROT_NONE cache window region first, then remap
>>>>>> sub-regions of that cache window with read or write permission. I guess
>>>>>> this might be a security concern. CC'ing virtiofs expert Stefan to answer
>>>>>> it more accurately.
>>>>>
>>>>> Yep. Since my previous sentence was cut off, I'll rephrase: I was wondering
>>>>> whether qemu can do vfio maps only once it remaps the PROT_NONE regions into
>>>>> PROT_READ|PROT_WRITE ones, rather than trying to map dma pages upon PROT_NONE.
>>>>
>>>> Userspace processes sometimes use PROT_NONE to reserve virtual address
>>>> space. That way future mmap(NULL, ...) calls will not accidentally
>>>> allocate an address from the reserved range.
>>>>
>>>> virtio-fs needs to do this because the DAX window mappings change at
>>>> runtime. Initially the entire DAX window is just reserved using
>>>> PROT_NONE. When it's time to mmap a portion of a file into the DAX
>>>> window, an mmap(fixed_addr, ...) call is made.
>>>
>>> Yes, I can understand the rationale for why the region is reserved. However,
>>> IMHO the real question is why such reservation behavior should affect qemu's
>>> memory layout, and even further the VFIO mappings.
>>>
>>> Note that PROT_NONE likely means that there's no backing page at all in
>>> this case. Since vfio will pin all the pages before mapping the DMAs, it
>>> also means that it's at least inefficient, because when we try to map all
>>> the PROT_NONE pages we'll try to fault in every single page, even if they
>>> may never be used.
>>>
>>> So I still think this patch is not doing the right thing. Instead we should
>>> somehow teach qemu that the virtiofs memory region should only be the size
>>> of the enabled regions (with PROT_READ|PROT_WRITE), rather than the whole
>>> reserved PROT_NONE region.
>>
>> virtio-fs was not implemented with IOMMUs in mind. The idea is just to
>> install a kvm.ko memory region that exposes the DAX window.
>>
>> Perhaps we need to treat the DAX window like an IOMMU? That way the
>> virtio-fs code can send map/unmap notifications and hw/vfio/ can
>> propagate them to the host kernel.
>
> Sounds right. One more thing to mention is that we may need to avoid tearing
> down the whole old DMA region when resizing the PROT_READ|PROT_WRITE region
> into e.g. a bigger one that covers some of the previously PROT_NONE part, as
> long as the before-resizing region can still be accessed by any hardware. It
> smells like something David is working on with virtio-mem; not sure whether
> there's any common infrastructure that could be shared.

"somehow teach qemu that the virtiofs memory region should only be the size of
enabled regions" - for virtio-mem, I'm working on resizeable RAM blocks/RAM
memory regions. It's fairly complicated, which is why I deferred upstreaming it,
and I still need to implement plenty of special cases for atomic resizes (e.g.,
vhost-user). But it's only one part of the puzzle for virtio-fs.
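[For reference, a minimal sketch of the reservation pattern Stefan describes
above: reserve the whole window with PROT_NONE, then later map a file range
into a sub-range with MAP_FIXED. The file path, window size, and chunk size
are made-up illustrations, not virtio-fs/QEMU code.]

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define WINDOW_SIZE (1ULL << 30)  /* 1 GiB DAX-window-like reservation */
#define CHUNK_SIZE  (2ULL << 20)  /* 2 MiB sub-mapping */

int main(void)
{
    /* Reserve address space only: no backing pages, not accessible. */
    void *window = mmap(NULL, WINDOW_SIZE, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (window == MAP_FAILED) {
        perror("mmap reserve");
        return EXIT_FAILURE;
    }

    int fd = open("/tmp/some-file", O_RDWR);  /* hypothetical file */
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* Later: map a file range over part of the reservation, in place. */
    void *sub = mmap(window, CHUNK_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_FIXED, fd, 0);
    if (sub == MAP_FAILED) {
        perror("mmap fixed");
        return EXIT_FAILURE;
    }

    /*
     * Only this 2 MiB range now has accessible, file-backed pages; the
     * rest of the window stays PROT_NONE with no backing pages, which is
     * why DMA-mapping (and thus pinning) the whole window is wasteful.
     */
    close(fd);
    munmap(window, WINDOW_SIZE);
    return EXIT_SUCCESS;
}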
AFAIU, it's not only about resizing the region for virtio-fs - we can have
PROT_NONE holes anywhere inside it.

In vfio, you cannot shrink mappings atomically. Growing works, but requires
additional mappings (-> bad). So assume you mapped a file of size X and want to
resize it: you first have to unmap, then remap with the new size. This is not
atomic, and thus problematic (see the rough sketch below).

For virtio-mem, we have a fixed block size and can map/unmap at that
granularity whenever we populate/discard memory within the device-managed
region. I don't think that applies to files in the case of virtio-fs.

The real question is: do we even *need* DMA from vfio devices to virtio-fs
regions? If not (do guests rely on it? what does the spec state?), just don't
care about vfio at all and don't map anything. But maybe I am missing something
important.

Q1: Is the virtio-fs region mapped into the system address space, so that we
have to map everything when there is no vIOMMU?
Q2: Is DMA from vfio devices to virtio-fs regions a valid use case?

--
Thanks,

David / dhildenb
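[Rough sketch of the non-atomic unmap + remap sequence referenced above, using
the vfio type1 IOMMU ioctls; this is not QEMU code, and 'container_fd', the
addresses and sizes are assumptions for illustration only.]

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <stdint.h>

/*
 * There is no "resize" operation for a vfio DMA mapping, only
 * VFIO_IOMMU_UNMAP_DMA followed by VFIO_IOMMU_MAP_DMA, so between the two
 * calls the device has no valid mapping for this IOVA range and any
 * in-flight DMA to it fails.
 */
static int remap_region(int container_fd, uint64_t vaddr, uint64_t iova,
                        uint64_t old_size, uint64_t new_size)
{
    struct vfio_iommu_type1_dma_unmap unmap = {
        .argsz = sizeof(unmap),
        .iova  = iova,
        .size  = old_size,
    };
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = vaddr,
        .iova  = iova,
        .size  = new_size,
    };

    /* Step 1: tear down the old mapping. */
    if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap) < 0) {
        return -1;
    }
    /* ... window where the IOVA range is unmapped ... */
    /* Step 2: install the new, resized mapping. */
    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}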