2018-04-24 07:48+0200, Christian Borntraeger:
> On 04/23/2018 09:48 PM, David Rientjes wrote:
> > On Mon, 23 Apr 2018, David Matlack wrote:
> >
> >>> Seems to be
> >>>     struct hlist_head mmu_page_hash[4096];    /* 24 32768 */
> >>
> >>> in kvm_arch.
> >>
> >>> This is now much larger mostly due to
> >>
> >>> commit 114df303a7eeae8b50ebf68229b7e647714a9bea
> >>>     kvm: x86: reduce collisions in mmu_page_hash
> >>
> >>
> >>> So maybe it is enough to allocate mmu_page_hash separately? Adding David
> >>> Matlack for opinions.
> >>
> >> Allocating mmu_page_hash separately would not create any issues I can think
> >> of. But Mark's concern about future bloat still remains.
> >>
> >> One option to move forward would be to make this change x86-specific by
> >> overriding kvm_arch_{alloc,free}_vm. Other architectures could do the same
> >> once they check that it's safe.
>
> That would certainly work. On the other hand it would be good if we could keep
> as much code as possible common across architectures.
> Paolo, Radim, any preference?

I'd prefer to have one patch that switches all architectures to vzalloc
(this patch + a hunk for x86), get acks from maintainers, and include it
in linux-next early (rc4 timeframe).

> > x86 already appears to be doing that in kvm_x86_ops, but struct kvm_arch
> > still is embedded into struct kvm. I'm wondering if there are any issues
> > with dynamically allocating kvm_arch for all architectures and then having
> > a per-arch override for how to allocate struct kvm_arch (kzalloc or
> > vzalloc or something else) depending on its needs?
>
> The disadvantage of allocation vs embedding is that it adds one additional
> pointer chase for code that could be cache cold (e.g. the guest was running).
> I have absolutely no idea whether that matters at all or not.

Yeah, better not to rely on CPU optimizations. :)

I would keep it embedded, as I imagine that the part of the KVM VM structure
that needs kzalloc would be small on any architecture and therefore better
allocated separately, instead of constraining the whole structure.
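
For illustration, a minimal sketch of the x86-specific route discussed above
(overriding kvm_arch_{alloc,free}_vm so that only x86 switches to vzalloc while
the generic default stays kzalloc). The __KVM_HAVE_ARCH_VM_ALLOC guard, the file
placement, and the GFP flag are assumptions made for this sketch, not taken from
the thread:

    /* include/linux/kvm_host.h (sketch): generic default stays kzalloc */
    #ifndef __KVM_HAVE_ARCH_VM_ALLOC
    static inline struct kvm *kvm_arch_alloc_vm(void)
    {
            return kzalloc(sizeof(struct kvm), GFP_KERNEL);
    }

    static inline void kvm_arch_free_vm(struct kvm *kvm)
    {
            kfree(kvm);
    }
    #endif

    /* arch/x86/include/asm/kvm_host.h (sketch): opt in to vzalloc, since the
     * 32 KiB mmu_page_hash makes struct kvm large enough that a physically
     * contiguous (high-order) allocation may become unreliable */
    #define __KVM_HAVE_ARCH_VM_ALLOC
    static inline struct kvm *kvm_arch_alloc_vm(void)
    {
            return vzalloc(sizeof(struct kvm));
    }

    static inline void kvm_arch_free_vm(struct kvm *kvm)
    {
            vfree(kvm);
    }

With vzalloc the allocation is virtually contiguous but need not be physically
contiguous, which avoids high-order allocation failures; kvm_arch stays embedded
in struct kvm, so there is no extra pointer chase of the kind Christian mentions
for a separately allocated kvm_arch.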