On 07/06/2010 01:44 PM, Xiao Guangrong wrote:
> In the speculative path, we should check the guest pte's reserved bits
> just as the real processor does.
>
> Reported-by: Marcelo Tosatti <mtosatti@xxxxxxxxxx>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu.c         |    3 +++
>  arch/x86/kvm/paging_tmpl.h |    3 ++-
>  2 files changed, 5 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 104756b..3dcd55d 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2781,6 +2781,9 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
>  		break;
>  	}
>
> +	if (is_rsvd_bits_set(vcpu, gentry, PT_PAGE_TABLE_LEVEL))
> +		gentry = 0;
> +
That only works if the gpte is for the same mode as the current vcpu mmu mode. In some cases it is too strict (a vcpu in PAE mode writing a 32-bit gpte), which is not too bad; in others it is too permissive (a vcpu in non-PAE mode writing a PAE gpte).
(Once upon a time mixed modes were rare, occurring only during OS setup, but with nested virtualization they happen all the time.)
>
>  	mmu_guess_page_from_pte_write(vcpu, gpa, gentry);
>  	spin_lock(&vcpu->kvm->mmu_lock);
>  	if (atomic_read(&vcpu->kvm->arch.invlpg_counter) != invlpg_counter)
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index dfb2720..19f0077 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -628,7 +628,8 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
>  		pte_gpa = first_pte_gpa + i * sizeof(pt_element_t);
>
>  		if (kvm_read_guest_atomic(vcpu->kvm, pte_gpa, &gpte,
> -					  sizeof(pt_element_t)))
> +					  sizeof(pt_element_t)) ||
> +		    is_rsvd_bits_set(vcpu, gpte, PT_PAGE_TABLE_LEVEL))
>  			return -EINVAL;
This is better done a few lines down where we check for !is_present_gpte(), no?
--
error compiling committee.c: too many arguments to function