Hi Drew,

On 21/08/2019 20:50, Andrew Jones wrote:
> If after an MMIO exit to userspace a VCPU is immediately run with an
> immediate_exit request, such as when a signal is delivered or an MMIO
> emulation completion is needed, then the VCPU completes the MMIO
> emulation and immediately returns to userspace. As the exit_reason
> does not get changed from KVM_EXIT_MMIO in these cases we have to
> be careful not to complete the MMIO emulation again, when the VCPU is
> eventually run again, because the emulation does an instruction skip
> (and doing too many skips would be a waste of guest code :-) We need
> to use additional VCPU state to track if the emulation is complete.
> As luck would have it, we already have 'mmio_needed', which even
> appears to be used in this way by other architectures already.
>
> Fixes: 0d640732dbeb ("arm64: KVM: Skip MMIO insn after emulation")
> Signed-off-by: Andrew Jones <drjones@xxxxxxxxxx>
> ---
>  virt/kvm/arm/arm.c  | 3 ++-
>  virt/kvm/arm/mmio.c | 1 +
>  2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index 35a069815baf..322cf9030bbe 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -669,7 +669,8 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	if (ret)
>  		return ret;
>
> -	if (run->exit_reason == KVM_EXIT_MMIO) {
> +	if (vcpu->mmio_needed) {
> +		vcpu->mmio_needed = 0;
>  		ret = kvm_handle_mmio_return(vcpu, vcpu->run);
>  		if (ret)
>  			return ret;
> diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c
> index a8a6a0c883f1..2d9b5e064ae0 100644
> --- a/virt/kvm/arm/mmio.c
> +++ b/virt/kvm/arm/mmio.c
> @@ -201,6 +201,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
>  		if (is_write)
>  			memcpy(run->mmio.data, data_buf, len);
>  		vcpu->stat.mmio_exit_user++;
> +		vcpu->mmio_needed = 1;
>  		run->exit_reason = KVM_EXIT_MMIO;
>  		return 0;
>  	}
>

Thanks for this. That's quite embarrassing. Out of curiosity, how was
this spotted?

Patch wise, I'd have a small preference for the following (untested)
patch, as it keeps the mmio_needed accesses close together, making it
easier to read (at least for me). What do you think?

	M.

diff --git a/virt/kvm/arm/mmio.c b/virt/kvm/arm/mmio.c
index a8a6a0c883f1..6af5c91337f2 100644
--- a/virt/kvm/arm/mmio.c
+++ b/virt/kvm/arm/mmio.c
@@ -86,6 +86,12 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	unsigned int len;
 	int mask;
 
+	/* Detect an already handled MMIO return */
+	if (unlikely(!vcpu->mmio_needed))
+		return 0;
+
+	vcpu->mmio_needed = 0;
+
 	if (!run->mmio.is_write) {
 		len = run->mmio.len;
 		if (len > sizeof(unsigned long))
@@ -188,6 +194,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	run->mmio.is_write = is_write;
 	run->mmio.phys_addr = fault_ipa;
 	run->mmio.len = len;
+	vcpu->mmio_needed = 1;
 
 	if (!ret) {
 		/* We handled the access successfully in the kernel. */

-- 
Jazz is not dead, it just smells funny...
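
For illustration, here is a rough userspace (VMM) sketch of the sequence the
commit message describes: the vCPU exits with KVM_EXIT_MMIO, userspace
emulates the access, and the next KVM_RUN is entered with immediate_exit set
because a signal arrived in the meantime. The emulate_mmio() and
vcpu_run_once() helpers below are hypothetical; only struct kvm_run (with its
immediate_exit and mmio fields), KVM_EXIT_MMIO and the KVM_RUN ioctl are the
actual KVM userspace interface from <linux/kvm.h>.

#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical device model: for reads, fill run->mmio.data before the
 * next KVM_RUN so the kernel can complete the load into the guest. */
static void emulate_mmio(struct kvm_run *run)
{
	if (!run->mmio.is_write)
		memset(run->mmio.data, 0, run->mmio.len); /* dummy device value */
}

static int vcpu_run_once(int vcpu_fd, struct kvm_run *run, int signal_pending)
{
	/* Ask KVM to finish any outstanding work and come straight back. */
	run->immediate_exit = signal_pending ? 1 : 0;

	if (ioctl(vcpu_fd, KVM_RUN, 0) < 0) {
		/*
		 * With immediate_exit set, KVM_RUN returns -EINTR here
		 * *after* the kernel has completed the pending MMIO
		 * emulation (including the instruction skip), yet
		 * run->exit_reason still reads KVM_EXIT_MMIO.  This is
		 * why the in-kernel completion cannot be keyed off
		 * exit_reason alone.
		 */
		return errno == EINTR ? 0 : -errno;
	}

	if (run->exit_reason == KVM_EXIT_MMIO)
		emulate_mmio(run);

	return 0;
}

On the entry that returns -EINTR the kernel has already completed the MMIO
return and skipped the trapped instruction; mmio_needed is what lets it
remember that, so the following KVM_RUN does not complete (and skip) a
second time.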