On Fri, Jul 19, 2019 at 5:21 AM Joerg Roedel <jroedel@xxxxxxx> wrote: > > On Thu, Jul 18, 2019 at 12:04:49PM -0700, Andy Lutomirski wrote: > > I find it problematic that there is no meaningful documentation as to > > what vmalloc_sync_all() is supposed to do. > > Yeah, I found that too, there is no real design around > vmalloc_sync_all(). It looks like it was just added to fit the purpose > on x86-32. That also makes it hard to find all necessary call-sites. > > > Which is obviously entirely inapplicable. If I'm understanding > > correctly, the underlying issue here is that the vmalloc fault > > mechanism can propagate PGD entry *addition*, but nothing (not even > > flush_tlb_kernel_range()) propagates PGD entry *removal*. > > Close, the underlying issue is not about PGD, but PMD entry > addition/removal on x86-32 pae systems. > > > I find it suspicious that only x86 has this. How do other > > architectures handle this? > > The problem on x86-PAE arises from the !SHARED_KERNEL_PMD case, which was > introduced by the Xen-PV patches and then re-used for the PTI-x32 > enablement to be able to map the LDT into user-space at a fixed address. > > Other architectures probably don't have the !SHARED_KERNEL_PMD case (or > do unsharing of kernel page-tables on any level where a huge-page could > be mapped). > > > At the very least, I think this series needs a comment in > > vmalloc_sync_all() explaining exactly what the function promises to > > do. > > Okay, as it stands, it promises to sync mappings for the vmalloc area > between all PGDs in the system. I will add that as a comment. > > > But maybe a better fix is to add code to flush_tlb_kernel_range() > > to sync the vmalloc area if the flushed range overlaps the vmalloc > > area. > > That would also cause needless overhead on x86-64 because the vmalloc > area doesn't need syncing there. I can make it x86-32 only, but that is > not a clean solution imo. Could you move the vmalloc_sync_all() call to the lazy purge path, though? If nothing else, it will cause it to be called fewer times under any given workload, and it looks like it could be rather slow on x86_32. > > > Or, even better, improve x86_32 the way we did x86_64: adjust > > the memory mapping code such that top-level paging entries are never > > deleted in the first place. > > There is not enough address space on x86-32 to partition it like on > x86-64. In the default PAE configuration there are _four_ PGD entries, > usually one for the kernel, and then 512 PMD entries. Partitioning > happens on the PMD level, for example there is one entry (2MB of address > space) reserved for the user-space LDT mapping. Ugh, fair enough.