On Mon, Dec 5, 2022 at 10:17 AM Ben Gardon <bgardon@xxxxxxxxxx> wrote:
>
> On Thu, Dec 1, 2022 at 11:57 AM Vipin Sharma <vipinsh@xxxxxxxxxx> wrote:
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 1782c4555d94..4d59c9d48277 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -384,6 +384,11 @@ static void kvm_flush_shadow_all(struct kvm *kvm)
> >  	kvm_arch_guest_memory_reclaimed(kvm);
> >  }
> >
> > +void * __weak kvm_arch_mmu_get_free_page(int nid, gfp_t gfp_flags)
> > +{
> > +	return (void *)__get_free_page(gfp_flags);
> > +}
> > +
>
> Rather than making this __weak, you could use #ifdef CONFIG_NUMA to
> just put all the code in the arch-neutral function.
>

I am not sure how that would work. Here, I am trying to keep this
feature only for x86. This function will be used by all architectures
except x86, where we have a different implementation in
arch/x86/kvm/mmu/mmu.c. So, even if CONFIG_NUMA is defined, we want to
keep the same definition on the other architectures. A sketch of how
the x86 override might look is at the end of this mail.

> >  #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE
> >  static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
> >  					       gfp_t gfp_flags)
> > @@ -393,7 +398,7 @@ static inline void *mmu_memory_cache_alloc_obj(struct kvm_mmu_memory_cache *mc,
> >  	if (mc->kmem_cache)
> >  		return kmem_cache_alloc(mc->kmem_cache, gfp_flags);
> >  	else
> > -		return (void *)__get_free_page(gfp_flags);
> > +		return kvm_arch_mmu_get_free_page(mc->node, gfp_flags);
> >  }
> >
> >  int __kvm_mmu_topup_memory_cache(struct kvm_mmu_memory_cache *mc, int capacity, int min)
> > --
> > 2.39.0.rc0.267.gcb52ba06e7-goog
> >
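
For illustration only, here is a minimal sketch of what the
x86-specific override of the __weak helper could look like (living in
arch/x86/kvm/mmu/mmu.c). The use of alloc_pages_node()/page_address()
and the NUMA_NO_NODE fallback are my assumptions for this sketch, not
necessarily what the posted series does:

void *kvm_arch_mmu_get_free_page(int nid, gfp_t gfp_flags)
{
	struct page *page;

	/* No node preference: keep the node-agnostic allocation. */
	if (nid == NUMA_NO_NODE)
		return (void *)__get_free_page(gfp_flags);

	/* Allocate the page table page on the requested NUMA node. */
	page = alloc_pages_node(nid, gfp_flags, 0);
	if (!page)
		return NULL;

	return page_address(page);
}

Since this definition is not marked __weak, the linker picks it over
the arch-neutral fallback in virt/kvm/kvm_main.c on x86, while every
other architecture keeps the plain __get_free_page() behavior.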