On 2024-08-29 13:18:27, Sean Christopherson wrote:
> On Thu, Aug 29, 2024, Vipin Sharma wrote:
> > @@ -871,8 +871,17 @@ void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp)
> >  		return;
> >  
> >  	++kvm->stat.nx_lpage_splits;
> > -	list_add_tail(&sp->possible_nx_huge_page_link,
> > -		      &kvm->arch.possible_nx_huge_pages);
> > +	if (is_tdp_mmu_page(sp)) {
> > +#ifdef CONFIG_X86_64
> > +		++kvm->arch.tdp_mmu_possible_nx_huge_pages_count;
> > +		list_add_tail(&sp->possible_nx_huge_page_link,
> > +			      &kvm->arch.tdp_mmu_possible_nx_huge_pages);
> > +#endif
> 
> Pass in the count+list, that way there's no #ifdef and no weird questions for
> what happens if the impossible happens (is_tdp_mmu_page() on 32-bit KVM).
> 
> void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp,
> 				 u64 *nr_pages, struct list_head *pages)
> {
> 	/*
> 	 * If it's possible to replace the shadow page with an NX huge page,
> 	 * i.e. if the shadow page is the only thing currently preventing KVM
> 	 * from using a huge page, add the shadow page to the list of "to be
> 	 * zapped for NX recovery" pages. Note, the shadow page can already be
> 	 * on the list if KVM is reusing an existing shadow page, i.e. if KVM
> 	 * links a shadow page at multiple points.
> 	 */
> 	if (!list_empty(&sp->possible_nx_huge_page_link))
> 		return;
> 
> 	++kvm->stat.nx_lpage_splits;
> 	++(*nr_pages);
> 	list_add_tail(&sp->possible_nx_huge_page_link, pages);
> }
> 

Sounds good. I wasn't sure whether passing pointers and incrementing the count through a pointer would be an acceptable approach.
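
Roughly what I have in mind for the caller side, as a sketch rather than final code: the TDP MMU fields are the ones from the diff above, but the shadow-MMU count field name (possible_nx_huge_pages_count) is hypothetical here, used only to illustrate the shape of the call.

	/*
	 * Caller-side sketch (not from the patch). Each MMU passes its own
	 * count and list, so track_possible_nx_huge_page() itself stays free
	 * of is_tdp_mmu_page() checks and #ifdefs. The shadow-MMU count field
	 * name below is hypothetical, for illustration only.
	 */
	if (is_tdp_mmu_page(sp))
		track_possible_nx_huge_page(kvm, sp,
					    &kvm->arch.tdp_mmu_possible_nx_huge_pages_count,
					    &kvm->arch.tdp_mmu_possible_nx_huge_pages);
	else
		track_possible_nx_huge_page(kvm, sp,
					    &kvm->arch.possible_nx_huge_pages_count,
					    &kvm->arch.possible_nx_huge_pages);

I will wire this up in the next version.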