On Fri, Dec 11, 2015 at 2:33 PM, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, 10 Dec 2015 19:21:43 -0800 Andy Lutomirski <luto@xxxxxxxxxx> wrote:
>
>> The x86 vvar mapping contains pages with differing cacheability
>> flags. This is currently only supported using (io_)remap_pfn_range,
>> but those functions can't be used inside page faults.
>
> Foggy.  What does "support" mean here?

We currently have a hack in which every x86 mm has a "vvar" vma whose
.fault handler always fails (it's the vm_special_mapping fault handler
backed by an empty pages array).  To make everything work, at mm
startup the vdso code uses remap_pfn_range and io_remap_pfn_range to
poke the pfns into the page tables.

I'd much rather implement this using the new .fault mechanism.  The
canonical way to implement .fault seems to be vm_insert_pfn, but
vm_insert_pfn doesn't allow setting per-page cacheability.
Unfortunately, one of the three x86 vvar pages needs to be uncacheable
because it's a genuine IO page, so I can't use vm_insert_pfn.  (Rough
sketch of what I have in mind at the end of this mail.)

I suppose I could just call io_remap_pfn_range from .fault, but I
think that's frowned upon.  Admittedly, I was never really sure *why*
it's frowned upon.  This goes way back to 2007
(e0dc0d8f4a327d033bfb63d43f113d5f31d11b3c), when .fault got fancier.

>> Add vm_insert_pfn_prot to support varying cacheability within the
>> same non-COW VMA in a more sane manner.
>
> Here, "support" presumably means "insertion of pfns".  Can we spell
> all this out more completely please?

Yes, will fix.

>> x86 needs this to avoid a CRIU-breaking and memory-wasting explosion
>> of VMAs when supporting userspace access to the HPET.
>
> Otherwise, Ack.

--Andy

--
Andy Lutomirski
AMA Capital Management, LLC
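P.S. To make the .fault plan above concrete, here's a rough sketch of
the handler I have in mind.  This is not the real vdso code:
VVAR_PAGE_OFFSET, HPET_PAGE_OFFSET, vvar_pfn() and hpet_pfn() are
made-up placeholders for the real vdso bookkeeping, and the sketch
assumes the proposed vm_insert_pfn_prot() exists.

#include <linux/mm.h>

/* Sketch only -- placeholder offsets and pfn helpers, not vdso code. */
static int vvar_fault(const struct vm_special_mapping *sm,
                      struct vm_area_struct *vma, struct vm_fault *vmf)
{
        unsigned long addr = (unsigned long)vmf->virtual_address;
        int ret = -EFAULT;

        if (vmf->pgoff == VVAR_PAGE_OFFSET) {
                /* Ordinary memory: the default cacheable pgprot is fine. */
                ret = vm_insert_pfn(vma, addr, vvar_pfn());
        } else if (vmf->pgoff == HPET_PAGE_OFFSET) {
                /*
                 * Genuine IO page: must be mapped uncacheable, hence
                 * the explicit pgprot that vm_insert_pfn can't express.
                 */
                ret = vm_insert_pfn_prot(vma, addr, hpet_pfn(),
                                         pgprot_noncached(PAGE_READONLY));
        }

        if (ret == 0 || ret == -EBUSY)
                return VM_FAULT_NOPAGE;
        return VM_FAULT_SIGBUS;
}

The whole point is the extra pgprot argument to vm_insert_pfn_prot():
the HPET page gets pgprot_noncached() while the vvar page keeps the
default cacheable protection, all within a single VMA.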