On Thu, Jul 18, 2024 at 10:03:01AM -0400, Peter Xu wrote:
> On Thu, Jul 18, 2024 at 09:50:31AM +0800, Yan Zhao wrote:
> > Ok. Then if we have two sets of pfns, then we can
> > 1. Call remap_pfn_range() in mmap() for pfn set 1.
> 
> I don't think this will work.. At least from the current implementation,
> remap_pfn_range() will only reserve the memtype if the range covers the
> whole vma.

Hmm, by referring to pfn set 1 and pfn set 2, I mean that both cover the
entire vma, but at different times.

To be more precise, consider this hypothetical scenario (not the same as
what's implemented in vfio-pci, but it seems plausible):

Suppose we have a vma covering only one page. Then
(1) Initially, the vma is mapped to pfn1 with remap_pfn_range().
(2) Subsequently, unmap_single_vma() is invoked to unmap the entire vma.
(3) The driver then maps the entire vma to pfn2 in its fault handler.

Given this context, my questions are:
1. How can we reserve the memory type for pfn2? Should we call
   track_pfn_remap() in mmap() in advance?
2. How do we untrack the memory types for pfn1 and pfn2, considering that
   they belong to the same vma but are used mutually exclusively, never
   concurrently?

(A minimal sketch of this scenario is appended at the bottom of this mail.)

Thanks
Yan

> 
> > 2. Export track_pfn_remap() and call track_pfn_remap() in mmap() for pfn
> >    set 2.
> > 3. Unmap and call vmf_insert_pfn() in the fault handler to map pfn set 2.
> 
> IMO this should be the similar case of MMIO being disabled on the bar,
> where we can use track_pfn_remap() to register the whole vma in mmap()
> first. Then in this case if you prefer proactive injection of partial of
> the pfn mappings, one can do that via vmf_insert_pfn*() in mmap() after the
> memtype registered.
> 
> > However, I'm not sure if we can properly untrack both pfn sets 1 and 2.
> 
> Untrack should so far only happen per-vma, AFAIU, so "set 1+2" need to be
> done together as they belong to the same vma.
> 
> > By exporting untrack_pfn() too? Or, you'll leave VFIO to use
> > ioremap/iounmap() (or the variants) to reserve memtype by itself?
> 
> Thanks,
> 
> --
> Peter Xu
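
---

Sketch referenced above. It's purely hypothetical (not the actual vfio-pci
code): my_dev, my_dev_mmap(), my_dev_fault(), pfn1 and pfn2 are made-up
names, and the final track_pfn_remap() call assumes the symbol were
exported, which is exactly the open question:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pgtable.h>

struct my_dev {
	unsigned long pfn1;	/* pfn backing the vma at mmap() time */
	unsigned long pfn2;	/* pfn backing the vma after the unmap */
};

/* Step (3): after unmap_single_vma(), faults map the page to pfn2. */
static vm_fault_t my_dev_fault(struct vm_fault *vmf)
{
	struct my_dev *dev = vmf->vma->vm_private_data;

	return vmf_insert_pfn(vmf->vma, vmf->address, dev->pfn2);
}

static const struct vm_operations_struct my_dev_vm_ops = {
	.fault = my_dev_fault,
};

static int my_dev_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct my_dev *dev = file->private_data;
	unsigned long size = vma->vm_end - vma->vm_start;
	int ret;

	vma->vm_ops = &my_dev_vm_ops;
	vma->vm_private_data = dev;

	/*
	 * Step (1): map the whole (single page) vma to pfn1.  Since the
	 * range covers the entire vma, this also reserves pfn1's memtype.
	 */
	ret = remap_pfn_range(vma, vma->vm_start, dev->pfn1, size,
			      vma->vm_page_prot);
	if (ret)
		return ret;

	/*
	 * Question 1: should pfn2's memtype be reserved up front like this
	 * (which would require exporting track_pfn_remap())?  Question 2:
	 * how would pfn1 and pfn2 later be untracked, given untrack_pfn()
	 * operates on the vma as a whole?
	 */
	return track_pfn_remap(vma, &vma->vm_page_prot, dev->pfn2,
			       vma->vm_start, size);
}

With this shape, pfn1's reservation happens implicitly inside
remap_pfn_range(), while pfn2's would be explicit, so it's unclear to me
how teardown could release both cleanly from a single per-vma untrack.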