On Mon, Aug 12, 2024, Vipin Sharma wrote:
> -static void kvm_recover_nx_huge_pages(struct kvm *kvm)
> +/*
> + * Get the first shadow MMU page of the desired type from the NX huge pages list.
> + * Return NULL if the list doesn't have the needed page within the first @max pages.
> + */
> +struct kvm_mmu_page *kvm_mmu_possible_nx_huge_page(struct kvm *kvm, bool tdp_mmu,
> +                                                   ulong max)

My preference is "unsigned long" over "ulong".  Line lengths be damned, for
this case ;-).

> {
> -        unsigned long nx_lpage_splits = kvm->stat.nx_lpage_splits;
> -        struct kvm_memory_slot *slot;
> -        int rcu_idx;
> -        struct kvm_mmu_page *sp;
> -        unsigned int ratio;
> -        LIST_HEAD(invalid_list);
> -        bool flush = false;
> -        ulong to_zap;
> +        struct kvm_mmu_page *sp = NULL;
> +        ulong i = 0;
>
> -        rcu_idx = srcu_read_lock(&kvm->srcu);
> -        write_lock(&kvm->mmu_lock);
> +        /*
> +         * We use a separate list instead of just using active_mmu_pages because
> +         * the number of shadow pages that can be replaced with an NX huge page
> +         * is expected to be relatively small compared to the total number of
> +         * shadow pages.  And because the TDP MMU doesn't use active_mmu_pages.
> +         */
> +        list_for_each_entry(sp, &kvm->arch.possible_nx_huge_pages, possible_nx_huge_page_link) {
> +                if (i++ >= max)
> +                        break;
> +                if (is_tdp_mmu_page(sp) == tdp_mmu)
> +                        return sp;
> +        }

This is silly and wasteful.  E.g. in the (unlikely) case there's one TDP MMU
page amongst hundreds/thousands of shadow MMU pages, this will walk the list
until @max, and then move on to the shadow MMU.

Why not just use separate lists?
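
For reference, a rough sketch of what per-type lists could look like.  This
is just an illustration, not code from this series: the names KVM_SHADOW_MMU,
KVM_TDP_MMU, KVM_NR_MMU_TYPES, and the exact helper signature are made up,
and the @max parameter goes away entirely because the lookup never has to
skip over pages of the wrong MMU type.

enum kvm_mmu_type {
        KVM_SHADOW_MMU,
        KVM_TDP_MMU,
        KVM_NR_MMU_TYPES,
};

/* In struct kvm_arch: one possible-NX list per MMU type (illustrative). */
struct list_head possible_nx_huge_pages[KVM_NR_MMU_TYPES];

static struct kvm_mmu_page *kvm_mmu_possible_nx_huge_page(struct kvm *kvm,
                                                          bool tdp_mmu)
{
        int type = tdp_mmu ? KVM_TDP_MMU : KVM_SHADOW_MMU;

        /*
         * Pages are added at the tail, so the head of each list is the
         * oldest page of that MMU type; no walking or counting needed.
         */
        return list_first_entry_or_null(&kvm->arch.possible_nx_huge_pages[type],
                                        struct kvm_mmu_page,
                                        possible_nx_huge_page_link);
}

track_possible_nx_huge_page() would then pick the list via is_tdp_mmu_page(sp)
at insertion time, and the recovery worker would zap from each list
independently.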