It is important to handle the exit code on the same CPU as the guest,
especially as we're accessing resources that are per-CPU (caches, for
example). To achieve this, extend the preempt-disabled section so that it
encompasses both __kvm_vcpu_run() and handle_exit(). user_mem_abort() can
sleep though (as it calls gfn_to_pfn()), so preemption has to be reenabled
at that stage.

Reported-by: Will Deacon <will.deacon at arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier at arm.com>
---
 arch/arm/kvm/arm.c |    2 ++
 arch/arm/kvm/mmu.c |    2 ++
 2 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 13681a1..b96462b 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -522,6 +522,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 
 		update_vttbr(vcpu->kvm);
 
+		preempt_disable();
 		local_irq_disable();
 		kvm_guest_enter();
 		vcpu->mode = IN_GUEST_MODE;
@@ -540,6 +541,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
 		trace_kvm_exit(vcpu->arch.regs.pc);
 
 		ret = handle_exit(vcpu, run, ret);
+		preempt_enable();
 		if (ret) {
 			kvm_err("Error in handle_exit\n");
 			break;
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 992d39a..ae38c21 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -419,7 +419,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	pfn_t pfn;
 	int ret;
 
+	preempt_enable();
 	pfn = gfn_to_pfn(vcpu->kvm, gfn);
+	preempt_disable();
 
 	if (is_error_pfn(pfn)) {
 		put_page(pfn_to_page(pfn));

-- 
1.7.3.4