However, I think if you have locking rules that can fit into a VMA fault
path and link move_notify to unmap_mapping_range() then you've got a
pretty usable API.

> For cpu mmaps I'm more worried about the arch bits in the pte, stuff like
> caching mode or encrypted memory bits and things like that. There's
> vma->vm_pgprot, but it's a mess. But maybe this all is an incentive to
> clean up that mess a bit.

I'm convinced we need meta-data along with pfns; there is too much stuff
that needs more information than just the address: cachability, CC
encryption, the exporting device, etc.

This is a topic we'll partially cross when we talk about how to fully
remove the struct page requirements from the new DMA API. I'm hoping we
can get to something where we describe not just how the pfns should be
DMA mapped, but also how they should be CPU mapped. For instance, that
this PFN space is always mapped uncachable, in the CPU and in the IOMMU.

We also have current bugs on the iommu/vfio side where we are fudging CC
stuff, like assuming CPU memory is encrypted (not always true) and that
MMIO is non-encrypted (not always true).

> I thought iommuv2 (or whatever linux calls these) has full fault support
> and could support current move semantics. But yeah for iommu without
> fault support we need some kind of pin or a newly formalized revoke model.

No, this is HW dependent, including on the PCI device, and I'm aware of
no HW that fully implements this in a way that could be useful to
implement arbitrary move semantics for VFIO.

Jason
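P.S. To make the fault path idea a bit more concrete, here is a rough
sketch of the shape I have in mind. None of this is real code; my_buffer,
my_exporter_* and my_buffer_pfn() are made-up names and the locking is
only illustrative:

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/mm.h>

/* Hypothetical exporter-private state */
struct my_buffer {
	struct dma_resv *resv;          /* same resv as the dma-buf */
	struct address_space *mapping;  /* where its CPU mmaps live */
	loff_t size;
};

/* Hypothetical: return the pfn the buffer currently lives at */
static unsigned long my_buffer_pfn(struct my_buffer *buf, pgoff_t pgoff);

static void my_exporter_move_notify(struct dma_buf_attachment *attach)
{
	struct my_buffer *buf = attach->dmabuf->priv;

	/* move_notify is called with the reservation lock held */
	dma_resv_assert_held(attach->dmabuf->resv);

	/* ... invalidate this attachment's device mapping ... */

	/*
	 * Tear down every CPU PTE pointing at the old location so the
	 * next CPU access re-faults through the VMA fault handler.
	 */
	unmap_mapping_range(buf->mapping, 0, buf->size, 1);
}

static vm_fault_t my_exporter_fault(struct vm_fault *vmf)
{
	struct my_buffer *buf = vmf->vma->vm_private_data;
	vm_fault_t ret;

	/*
	 * The lock that serializes moves has to be safe to take here,
	 * under mmap_lock -- that is the locking rule that must fit.
	 */
	dma_resv_lock(buf->resv, NULL);

	/* Re-insert whatever pfn the buffer currently sits at */
	ret = vmf_insert_pfn(vmf->vma, vmf->address,
			     my_buffer_pfn(buf, vmf->pgoff));

	dma_resv_unlock(buf->resv);
	return ret;
}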
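And for the meta-data point, what I'm imagining is roughly a descriptor
along these lines travelling with the pfns, so both the CPU mmap path
and the IOMMU mapping path can pick the right attributes. Again, purely
a sketch, nothing like this exists today:

/* Hypothetical, not an existing kernel struct */
struct pfn_range_desc {
	unsigned long	pfn;		/* first pfn of the range */
	unsigned long	nr_pages;
	pgprot_t	cpu_prot;	/* e.g. always uncachable */
	int		iommu_prot;	/* IOMMU_CACHE/IOMMU_MMIO style flags */
	bool		cc_encrypted;	/* CC encryption state of the range */
	struct device	*provider;	/* exporting device, NULL for system RAM */
};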