On Tue, Nov 21, 2017 at 3:42 PM, Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx> wrote:
> On 11/21/2017 03:32 PM, Andy Lutomirski wrote:
>>> To do this, we need to special-case the kernel page table walker to deal
>>> with PTEs only since we can't just grab PMD or PUD flags and stick them
>>> in a PTE. We would only be able to use this path when populating things
>>> that we know are 4k-mapped in the kernel.
>> I'm not sure I'm understanding the issue. We'd promise to map the
>> cpu_entry_area without using large pages, but I'm not sure I know what
>> you're referring to. The only issue I see is that we'd have to be
>> quite careful when tearing down the user tables to avoid freeing the
>> shared part.
>
> It's just that it currently handles large and small pages in the kernel
> mapping that it's copying. If we want to have it just copy the PTE,
> we've got to refactor things a bit to separate out the PTE flags from
> the paddr being targeted, and also make sure we don't munge the flags
> conversion from the large-page entries to 4k PTEs. The PAT and PSE
> bits cause a bit of trouble here.

I'm confused. I mean something like:

unsigned long start = (unsigned long)get_cpu_entry_area(cpu);

for (unsigned long addr = start;
     addr < start + sizeof(struct cpu_entry_area);
     addr += PAGE_SIZE) {
	pte_t pte = *pte_offset_k(addr);  /* or however you do this */
	kaiser_add_mapping(pte_pfn(pte), pte_prot(pte));
}

modulo the huge pile of typos in there that surely exist.

But I still prefer my approach of just sharing the cpu_entry_area pmd
entries between the user and kernel tables.
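
[Editor's note: a minimal, hedged sketch of the loop Andy describes, not
the in-tree code. It assumes cpu_entry_area is guaranteed to be 4k-mapped,
uses lookup_address() to find the kernel PTE for each page (standing in
for the pte_offset_k() placeholder above), uses pte_pgprot() where the
sketch says pte_prot(), and assumes a hypothetical kaiser_add_mapping()
variant taking a pfn and protection bits, as in the sketch in the mail.]

/*
 * Hedged sketch only, assuming the usual x86 mm headers. The
 * kaiser_add_mapping(pfn, prot) form is taken from the sketch in the
 * mail above, not from the actual KAISER patches.
 */
static void kaiser_map_cpu_entry_area(int cpu)
{
	unsigned long start = (unsigned long)get_cpu_entry_area(cpu);
	unsigned long end = start + sizeof(struct cpu_entry_area);
	unsigned long addr;

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		unsigned int level;
		pte_t *ptep = lookup_address(addr, &level);

		/* cpu_entry_area is promised to be 4k-mapped in the kernel. */
		if (WARN_ON(!ptep || level != PG_LEVEL_4K))
			continue;

		/* Mirror this page into the user (shadow) page tables. */
		kaiser_add_mapping(pte_pfn(*ptep), pte_pgprot(*ptep));
	}
}

The pmd-sharing alternative Andy prefers would avoid this walk entirely:
the pmd entries covering cpu_entry_area would be shared between the user
and kernel tables, so both point at the same PTE pages and there is no
per-PTE flag conversion (and no PAT/PSE munging) to get wrong.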