Hi Will,

On 9/30/20 11:24 AM, Will Deacon wrote:
> From: Will Deacon <willdeacon@xxxxxxxxxx>
>
> If a change in the MMU notifier sequence number forces user_mem_abort()
> to return early when attempting to handle a stage-2 fault, we return
> uninitialised stack to kvm_handle_guest_abort(), which could potentially
> result in the injection of an external abort into the guest or a spurious
> return to userspace. Neither of these are what we want to do.
>
> Initialise 'ret' to 0 in user_mem_abort() so that bailing due to a
> change in the MMU notifier sequence number is treated as though the
> fault was handled.
>
> Cc: Gavin Shan <gshan@xxxxxxxxxx>
> Cc: Alexandru Elisei <alexandru.elisei@xxxxxxx>
> Reported-by: kernel test robot <lkp@xxxxxxxxx>
> Reported-by: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
> Signed-off-by: Will Deacon <will@xxxxxxxxxx>
> ---
>
> Applies on top of kvmarm/next.
>
>  arch/arm64/kvm/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c5c26a9cb85b..a816cb8e619b 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -742,7 +742,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  			  struct kvm_memory_slot *memslot, unsigned long hva,
>  			  unsigned long fault_status)
>  {
> -	int ret;
> +	int ret = 0;
>  	bool write_fault, writable, force_pte = false;
>  	bool exec_fault;
>  	bool device = false;

This matches the current behavior of user_mem_abort(), where ret = 0 comes
from the call to kvm_mmu_topup_memory_cache(), which was made conditional by
the EL2 page table rewrite. It makes sense to me - we return to the guest and
take the fault again until the changes to the translation tables have been
completed (mmu_notifier_seq remains the same and mmu_notifier_count == 0):

Reviewed-by: Alexandru Elisei <alexandru.elisei@xxxxxxx>

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
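
[Editor's note: for readers following the thread, below is a simplified sketch
of the control flow being discussed. It is paraphrased from user_mem_abort()
and kvm_handle_guest_abort() in arch/arm64/kvm/mmu.c as of kvmarm/next at the
time of this patch, not a verbatim copy; the elided steps are only indicated
by comments.]

	static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
				  struct kvm_memory_slot *memslot, unsigned long hva,
				  unsigned long fault_status)
	{
		struct kvm *kvm = vcpu->kvm;
		unsigned long mmu_seq;
		int ret = 0;			/* the fix; previously left uninitialised */

		/* ... permission checks, memory cache top-up (now conditional), ... */

		mmu_seq = kvm->mmu_notifier_seq;
		smp_rmb();

		/* ... translate hva to a pfn, work out the mapping size, ... */

		spin_lock(&kvm->mmu_lock);
		if (mmu_notifier_retry(kvm, mmu_seq))
			goto out_unlock;	/* 'ret' is never assigned on this path */

		/* ... install the stage-2 mapping, setting 'ret' from the map call ... */

	out_unlock:
		spin_unlock(&kvm->mmu_lock);
		return ret != -EAGAIN ? ret : 0;
	}

	/*
	 * In the caller, kvm_handle_guest_abort(), a return value of 0 is folded
	 * into 1, i.e. "fault handled, resume the guest", so the vCPU re-enters
	 * the guest and simply takes the stage-2 fault again once the notifier
	 * has finished updating the translation tables.
	 */
	ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_status);
	if (ret == 0)
		ret = 1;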