Avi Kivity wrote:
>> +
>> +		if (!!is_pae(vcpu) != sp->role.cr4_pae ||
>> +		    is_nx(vcpu) != sp->role.nxe)
>> +			continue;
>> +
>
> Do we also need to check cr0.wp?

I think so. That said, skipping it is not too bad, since we only ever decrease the access rights: for example, we mark the mapping read-only for a cr0.wp=0 page, so a later write access causes a #PF and read accesses still work.

>
>> 	if (gentry)
>> 		mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
>
> Please move the checks to mmu_pte_write_new_pte(), it's a more logical
> place.
>
> It means the reserved bits check happens multiple times, but that's OK.

OK

> Also, you can use arch.mmu.base_role to compare:
>
> static const kvm_mmu_page_role mask = { .level = -1U, .cr4_pae = 1,
> ... };
>
> if ((sp->role.word ^ base_role.word) & mask.word)
>     return;

OK, will update it :-)