On Tue, Jan 31, 2023 at 05:54:45PM +0000, Sean Christopherson wrote:
> On Tue, Jan 31, 2023, Oliver Upton wrote:
> > On Tue, Jan 31, 2023 at 01:18:15AM +0000, Sean Christopherson wrote:
> > > On Mon, Jan 30, 2023, Oliver Upton wrote:
> > > > I think that Marc's suggestion of having userspace configure this is
> > > > sound. After all, userspace _should_ know the granularity of the backing
> > > > source it chose for guest memory.
> > > >
> > > > We could also interpret a cache size of 0 to signal that userspace wants
> > > > to disable eager page split for a VM altogether. It is entirely possible
> > > > that the user will want a differing QoS between slice-of-hardware and
> > > > overcommitted VMs.
> > >
> > > Maybe. It's also entirely possible that QoS is never factored in, e.g. if QoS
> > > guarantees for all VMs on a system are better met by enabling eager splitting
> > > across the board.
> > >
> > > There are other reasons to use module/kernel params beyond what Marc listed,
> > > e.g. to let the user opt out even when something is on by default. x86's TDP
> > > MMU has benefited greatly from downstream users being able to do A/B
> > > performance testing this way. I suspect x86's eager_page_split knob was added
> > > largely for this reason, e.g. to easily see how a specific workload is
> > > affected by eager splitting. That seems like a reasonable fit on the ARM side
> > > as well.
> >
> > There's a rather important distinction here in that we'd allow userspace
> > to select the page split cache size, which should be correctly sized for
> > the backing memory source. Considering the break-before-make rules of
> > the architecture, the only way eager split is performant on arm64 is by
> > replacing a block entry with a fully populated table hierarchy in one
> > operation. AFAICT, you don't have this problem on x86, as the
> > architecture generally permits a direct valid->valid transformation
> > without an intermediate invalidation. Well, ignoring iTLB multihit :)
> >
> > So, the largest transformation we need to do right now is on a PUD w/
> > PAGE_SIZE=4K, leading to 513 pages as proposed in the series. Exposing
> > that configuration option in a module parameter is presumptive that all
> > VMs on a host use the exact same memory configuration, which doesn't
> > feel right to me.
>
> Can you elaborate on the cache size needing to be tied to the backing source?

The proposed eager split mechanism attempts to replace a block with a
fully populated page table hierarchy (i.e. mapped at PTE granularity) in
order to avoid successive break-before-make invalidations. The cache
size must be >= the number of pages required to build out that fully
mapped page table hierarchy.

> Do the issues arise if you get to a point where KVM can have PGD-sized hugepages
> with PAGE_SIZE=4KiB?

Those problems arise when splitting any hugepage larger than a PMD. It
just so happens that 4K is the only configuration that supports larger
mappings at the moment. If we were to take the step-down approach to
eager page splitting, there would be a lot of knock-on break-before-make
operations as we go PUD -> PMD -> PTE.

> Or do you want to let userspace optimize _now_ for PMD+4KiB?

The default cache value should probably optimize for PMD splitting and
give userspace the option to scale that up for PUD or greater if it sees
fit.

--
Thanks,
Oliver
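
P.S. For anyone following along, here is a minimal standalone sketch of the
cache-size arithmetic above. This is not KVM code; the helper name and the
PTRS_PER_TABLE constant are made up for illustration, and it assumes a 4K
granule (512 entries per table level):

/*
 * Illustration only: how many page-table pages are needed to replace a
 * single block mapping with a hierarchy fully mapped at PTE granularity,
 * which is what the split cache must be able to supply up front so the
 * whole replacement happens in one break-before-make.
 */
#include <stdio.h>

#define PTRS_PER_TABLE	512	/* PAGE_SIZE / sizeof(u64) with 4K pages */

/*
 * levels_above_pte = 1 for a PMD block, 2 for a PUD block.
 *
 *   PMD block: 1 PTE table                      =   1 page
 *   PUD block: 1 PMD table + 512 PTE tables     = 513 pages
 */
static unsigned long split_cache_pages(unsigned int levels_above_pte)
{
	unsigned long tables = 0, at_this_level = 1;
	unsigned int i;

	for (i = 0; i < levels_above_pte; i++) {
		tables += at_this_level;
		at_this_level *= PTRS_PER_TABLE;
	}
	return tables;
}

int main(void)
{
	printf("PMD block split: %lu page(s)\n", split_cache_pages(1)); /* 1 */
	printf("PUD block split: %lu page(s)\n", split_cache_pages(2)); /* 513 */
	return 0;
}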