Fix a bug in kvm_clear_guest() where it would write beyond the target
page _if_ handed a gpa+len that would span multiple pages.  Luckily, the
bug is unhittable in the current code base as all users ensure the
gpa+len is bound to a single page.

Patch 2 hardens the underlying single page APIs to guard against a bad
offset+len, e.g. so that bugs like the one in kvm_clear_guest() are
noisy and don't escalate to an out-of-bounds access.

Verified and tested by hacking KVM to use kvm_clear_guest() when zeroing
all three pages used for KVM's hidden TSS:

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f18c2d8c7476..ce64e490e9c7 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3872,14 +3872,17 @@ bool __vmx_guest_state_valid(struct kvm_vcpu *vcpu)
 
 static int init_rmode_tss(struct kvm *kvm, void __user *ua)
 {
-	const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
+	// const void *zero_page = (const void *) __va(page_to_phys(ZERO_PAGE(0)));
 	u16 data;
-	int i;
+	// int i;
 
-	for (i = 0; i < 3; i++) {
-		if (__copy_to_user(ua + PAGE_SIZE * i, zero_page, PAGE_SIZE))
-			return -EFAULT;
-	}
+	if (kvm_clear_guest(kvm, to_kvm_vmx(kvm)->tss_addr, PAGE_SIZE * 3))
+		return -EFAULT;
+
+	// for (i = 0; i < 3; i++) {
+	// 	if (__copy_to_user(ua + PAGE_SIZE * i, zero_page, PAGE_SIZE))
+	// 		return -EFAULT;
+	// }
 
 	data = TSS_BASE_SIZE + TSS_REDIRECTION_SIZE;
 	if (__copy_to_user(ua + TSS_IOPB_BASE_OFFSET, &data, sizeof(u16)))

Sean Christopherson (2):
  KVM: Write the per-page "segment" when clearing (part of) a guest page
  KVM: Harden guest memory APIs against out-of-bounds accesses

 virt/kvm/kvm_main.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)


base-commit: 15e1c3d65975524c5c792fcd59f7d89f00402261
-- 
2.46.0.469.g59c65b2a67-goog
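
For readers without the patches in hand, the gist of the two changes is
roughly the following.  This is a hedged sketch built around the existing
next_segment() and kvm_clear_guest_page() helpers in virt/kvm/kvm_main.c
and the generic WARN_ON_ONCE() macro; it is not the actual diffs:

	/* Patch 1, roughly: kvm_clear_guest() must pass the per-page
	 * segment 'seg' to the single-page helper, not the full 'len',
	 * otherwise a gpa+len spanning multiple pages writes past the
	 * end of the first page. */
	while ((seg = next_segment(len, offset)) != 0) {
		ret = kvm_clear_guest_page(kvm, gfn, offset, seg);
		if (ret < 0)
			return ret;
		offset = 0;
		len -= seg;
		++gfn;
	}

	/* Patch 2, roughly: have the single-page helpers noisily reject
	 * any offset+len that would walk off the end of the page, so a
	 * buggy caller gets a WARN and -EFAULT instead of an
	 * out-of-bounds access. */
	if (WARN_ON_ONCE(offset + len > PAGE_SIZE))
		return -EFAULT;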