Avi Kivity wrote:
> On 05/30/2010 03:36 PM, Xiao Guangrong wrote:
>> Introduce for_each_gfn_sp(), for_each_gfn_indirect_sp() and
>> for_each_gfn_indirect_valid_sp() to clean up hlist traversal
>>
>> Signed-off-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxx>
>> ---
>>  arch/x86/kvm/mmu.c |  129 ++++++++++++++++++++++------------------------
>>  1 files changed, 54 insertions(+), 75 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 56f8c3c..84c705e 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -1200,6 +1200,22 @@ static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>>
>>  static int kvm_mmu_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>>
>> +#define for_each_gfn_sp(kvm, sp, gfn, pos, n)				\
>> +	hlist_for_each_entry_safe(sp, pos, n,				\
>> +	  &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)], hash_link)\
>> +		if (sp->gfn == gfn)

Avi, thanks for your review.

>
>   if (...)
>       for_each_gfn_sp(...)
>           blah();
>   else
>       BUG();
>
> will break.  Can do 'if ((sp)->gfn != (gfn)) ; else'.
>
> Or call functions from the for (;;) parameters to advance the cursor.
>
> (also use parentheses to protect macro arguments)
>

Yeah, it's my mistake, I'll fix it in the next version.

>
>> +
>> +#define for_each_gfn_indirect_sp(kvm, sp, gfn, pos, n)			\
>> +	hlist_for_each_entry_safe(sp, pos, n,				\
>> +	  &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)], hash_link)\
>> +		if (sp->gfn == gfn && !sp->role.direct)
>> +
>> +#define for_each_gfn_indirect_valid_sp(kvm, sp, gfn, pos, n)		\
>> +	hlist_for_each_entry_safe(sp, pos, n,				\
>> +	  &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)], hash_link)\
>> +		if (sp->gfn == gfn && !sp->role.direct &&		\
>> +		    !sp->role.invalid)
>
> Shouldn't we always skip invalid gfns?
Actually, kvm_mmu_unprotect_page() needs to find invalid shadow pages as well:

|	hlist_for_each_entry_safe(sp, node, n, bucket, hash_link)
|		if (sp->gfn == gfn && !sp->role.direct) {
|			pgprintk("%s: gfn %lx role %x\n", __func__, gfn,
|				 sp->role.word);
|			r = 1;
|			if (kvm_mmu_zap_page(kvm, sp))
|				goto restart;
|		}

I'm not sure whether we can skip invalid sp here, since doing so can change
this function's return value. :-(

> What about providing both gfn and role to the macro?
>

In the current code, nothing looks up an sp by gfn and the full role: in
kvm_mmu_get_page() we still need to do extra work for sps where
'sp->gfn == gfn && sp->role != role', and the other functions only compare
some members of role, not all of them.

Xiao
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html