On 1/16/24 08:19, Michael Roth wrote:
>
> So at the very least, if we went down this path, it would be worth
> investigating the following areas in addition to general perf testing:
>
> 1) Only splitting directmap regions corresponding to kernel-allocatable
>    *data* (hopefully that's even feasible...)

Take a look at the 64-bit memory map in here:

https://www.kernel.org/doc/Documentation/x86/x86_64/mm.rst

We already have separate mappings for kernel data and (normal) kernel
text.

> 2) Potentially deferring the split until an SNP guest is actually
>    run, so there isn't any impact just from having SNP enabled (though
>    you still take a hit from RMP checks in that case so maybe it's not
>    worthwhile, but that itself has been noted as a concern for users
>    so it would be nice to not make things even worse).

Yes, this would be nice too.

>> Actually, where _is_ the TLB flushing here?
> Boris pointed that out in v6, and we implemented it in v7, but it
> completely cratered performance:

That *desperately* needs to be documented.

How can it be safe to skip the TLB flush?  Is this akin to a page
permission promotion where you go from RO->RW but can skip the TLB
flush?  In that case, the CPU will see the RO TLB entry violation, drop
it, and re-walk the page tables, discovering the RW entry.

Does something similar happen here where the CPU sees the 2M/4k
conflict in the TLB, drops the 2M entry, does a re-walk, and then picks
up the newly-split 2M->4k entries?

I can see how something like that would work, but it's _awfully_ subtle
to go unmentioned.
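
For the archives, here is a rough sketch of the RO->RW analogy I have in
mind.  This is an illustration only, not the SNP patch code:
promote_kernel_pte_rw_noflush() is a made-up helper, and the exact pte
accessors (ptep_get(), pte_mkwrite_novma()) are just whatever a current
tree happens to provide.

  #include <linux/mm.h>           /* init_mm, set_pte_at() */
  #include <linux/pgtable.h>      /* ptep_get(), pte_mkwrite_novma() */

  /*
   * Promote a kernel PTE from RO to RW *without* an immediate TLB
   * flush.  A CPU still holding the stale RO entry takes a spurious
   * protection fault on its next write, drops the stale entry,
   * re-walks the page tables and finds the RW PTE.  The 2M->4k
   * directmap split would have to rely on the analogous
   * "drop the stale entry and re-walk" behavior for the missing
   * flush to be safe.
   */
  static void promote_kernel_pte_rw_noflush(pte_t *ptep, unsigned long addr)
  {
          pte_t pte = ptep_get(ptep);

          /* Relaxing protections only; tightening them would need a flush. */
          set_pte_at(&init_mm, addr, ptep, pte_mkwrite_novma(pte));

          /* Deliberately no flush_tlb_kernel_range() here. */
  }

If that kind of lazy-invalidation behavior is what the series is relying
on for the split, it needs to be spelled out in a comment right next to
the code that skips the flush.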