On Sun, Nov 22 2020 at 15:16, Andy Lutomirski wrote:
> On Fri, Nov 20, 2020 at 1:29 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> Anyway, clearly I'm the only one that cares, so I'll just crawl back
>> under my rock...
>
> I'll poke my head out of the rock for a moment, though...
>
> Several years ago, we discussed (in person at some conference IIRC)
> having percpu pagetables to get sane kmaps, percpu memory, etc.

Yes, I remember. That was our initial reaction in Prague to the looming
PTI challenge 3 years ago.

> The conclusion was that Linus thought the performance would suck and
> we shouldn't do it.

Linus had opinions, but we all agreed that depending on the workload and
the CPU features (think !PCID) the copy/pagefault overhead could be
significant.

> Since then, though, we added really fancy infrastructure for keeping
> track of a per-CPU list of recently used mms and efficiently tracking
> when they need to be invalidated. We called these "ASIDs". It would
> be fairly straightforward to have an entire pgd for each (cpu, asid)
> pair. Newly added second-level (p4d/pud/whatever -- have I ever
> mentioned how much I dislike the Linux pagetable naming conventions
> and folding tricks?) tables could be lazily faulted in, and copies of
> the full 2kB mess would only be needed when a new (cpu, asid) is
> allocated because either a flush happened while the mm was inactive on
> the CPU in question or because the mm fell off the percpu cache.
>
> The total overhead would be a bit more cache usage, 4kB * num cpus *
> num ASIDs per CPU (or 8k for PTI), and a few extra page faults (max
> num cpus * 256 per mm over the entire lifetime of that mm).
> The common case of a CPU switching back and forth between a small
> number of mms would have no significant overhead.

For CPUs which do not support PCID this sucks, which is everything
pre Westmere and all of 32bit. Yes, 32bit. If we go there then 32bit
has to bite the bullet and use the very same mechanism.
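To make the copy triggers concrete: the per-CPU tracking Andy refers to lives around struct tlb_state in arch/x86/mm/tlb.c. Below is a rough user-space sketch (all names hypothetical, not kernel code) of such a cache, showing the two points where a new (cpu, asid) pair would force a pgd recopy: the mm was flushed while inactive on this CPU, or it fell off the per-CPU cache.

```c
#include <stddef.h>

#define NR_ASIDS 6      /* x86 currently uses only 6 dynamic ASIDs */

struct mm { int id; };  /* opaque stand-in for struct mm_struct */

struct asid_slot {
	const struct mm *mm;   /* mm last loaded in this slot, NULL if free */
	unsigned long tlb_gen; /* flush generation seen at last use */
};

/* One instance per CPU in the real scheme; a single CPU modeled here. */
static struct asid_slot slots[NR_ASIDS];
static unsigned int next_victim; /* simplistic round-robin eviction */

/*
 * Return the ASID slot for @mm on this CPU. *need_copy is set when a
 * new (cpu, asid) pair is allocated or the cached state went stale --
 * that is the point where the ~2kB of pgd entries would be recopied.
 */
static unsigned int asid_lookup(const struct mm *mm, unsigned long tlb_gen,
				int *need_copy)
{
	unsigned int i;

	for (i = 0; i < NR_ASIDS; i++) {
		if (slots[i].mm == mm) {
			/* Flushed while inactive? Then the copy is stale. */
			*need_copy = slots[i].tlb_gen != tlb_gen;
			slots[i].tlb_gen = tlb_gen;
			return i;
		}
	}

	/* mm fell off the per-CPU cache: evict a slot, force a copy. */
	i = next_victim++ % NR_ASIDS;
	slots[i].mm = mm;
	slots[i].tlb_gen = tlb_gen;
	*need_copy = 1;
	return i;
}
```

The common case Andy mentions -- a CPU ping-ponging between a few mms -- hits the cached slot with an unchanged tlb_gen, so no copy is done at all; only eviction or a flush-while-inactive pays the copy cost.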
Not that I care much TBH. Even for those CPUs which support it we'd need
to increase the number of ASIDs significantly. Right now we use only 6
ASIDs, which is not a lot. There are process-heavy workloads out there
which do quite some context switching, so avoiding the copy matters. I'm
not worried about fork as the copy will probably be just noise.

That said, I'm not saying it shouldn't be done, but there are quite a
few things which need to be looked at. TBH, I really would love to see
that just to make GS kernel usage and the related mess in the ASM code
go away completely.

For the task at hand, i.e. replacing kmap_atomic() by kmap_local(), this
is not really helpful because we'd need to make all highmem-using
architectures do the same thing. But if we can pull it off on x86 the
required changes for the kmap_local() code are not really significant.

> On an unrelated note, what happens if you migrate_disable(), sleep for
> a looooong time, and someone tries to offline your CPU?

The hotplug code will prevent the CPU from going offline in that case,
i.e. it waits until the last task has left its migrate-disabled section.
But you are not supposed to invoke sleep($ETERNAL) in such a context.
Emphasis on 'not supposed' :)

Thanks,

        tglx