On Tue, Aug 03, 2021 at 12:55:25PM +0000, Shameerali Kolothum Thodi wrote:
> > > diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
> > > index 5584e84aed95..5fd51f5445c1 100644
> > > --- a/arch/arm64/kvm/vmid.c
> > > +++ b/arch/arm64/kvm/vmid.c
> > > @@ -116,6 +116,12 @@ static u64 new_vmid(struct kvm_vmid *kvm_vmid)
> > >  	return idx2vmid(vmid) | generation;
> > >  }
> > >
> > > +/* Call with preemption disabled */
> > > +void kvm_arm_vmid_clear_active(void)
> > > +{
> > > +	atomic64_set(this_cpu_ptr(&active_vmids), 0);
> > > +}
> >
> > I think this is very broken, as it will force everybody to take the
> > slow-path when they see an active_vmid of 0.
>
> Yes. I have seen that happening in my test setup.

Why didn't you say so?!

> > It also doesn't solve the issue I mentioned before, as an active_vmid
> > of 0 means that the reserved vmid is preserved.
> >
> > Needs more thought...
>
> How about we clear all the active_vmids in kvm_arch_free_vm() if it
> matches the kvm_vmid->id ? But we may have to hold the lock
> there

I think we have to be really careful not to run into the "suspended
animation" problem described in ae120d9edfe9 ("ARM: 7767/1: let the ASID
allocator handle suspended animation") if we go down this road.

Maybe something along the lines of:

ROLLOVER

  * Take lock
  * Inc generation
    => This will force everybody down the slow path
  * Record active VMIDs
  * Broadcast TLBI
    => Only active VMIDs can be dirty
    => Reserve active VMIDs and mark as allocated

VCPU SCHED IN

  * Set active VMID
  * Check generation
  * If mismatch then:
    * Take lock
    * Try to match a reserved VMID
    * If no reserved VMID, allocate new

VCPU SCHED OUT

  * Clear active VMID

but I'm not daft enough to think I got it right first time. I think it
needs both implementing *and* modelling in TLA+ before we merge it!

Will
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
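
For illustration only, here is a rough userspace C model of the ROLLOVER /
VCPU SCHED IN / VCPU SCHED OUT outline above. Every identifier in it
(model_vmid, rollover(), vcpu_sched_in/out(), vmid_reserved_owner,
NR_CPUS_MODEL, VMID_BITS, ...) is made up for the sketch and does not exist
in the kernel; a pthread mutex stands in for the allocator lock, and the
fast-path vs. rollover races that the real ASID allocator closes with
cmpxchg are deliberately ignored.

/*
 * Toy model of: rollover reserves whatever is active, sched-in takes a
 * fast path on generation match, sched-out clears the active VMID.
 * Not kernel code; races with concurrent rollover are not handled.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NR_CPUS_MODEL	4
#define VMID_BITS	8
#define NR_VMIDS	(1ULL << VMID_BITS)
#define VMID_MASK	(NR_VMIDS - 1)
#define GEN_STEP	NR_VMIDS	/* generation lives above the VMID bits */

struct model_vmid {
	_Atomic uint64_t id;		/* generation | vmid number */
};

static pthread_mutex_t vmid_lock = PTHREAD_MUTEX_INITIALIZER;
static _Atomic uint64_t vmid_generation = GEN_STEP;
static _Atomic uint64_t active_vmids[NR_CPUS_MODEL];
static bool vmid_in_use[NR_VMIDS];		/* allocated in current generation */
static uint64_t vmid_reserved_owner[NR_VMIDS];	/* old id allowed to keep this number */

/* ROLLOVER: bump the generation and reserve whatever is currently running. */
static void rollover(void)
{
	pthread_mutex_lock(&vmid_lock);
	atomic_fetch_add(&vmid_generation, GEN_STEP);
	memset(vmid_in_use, 0, sizeof(vmid_in_use));
	memset(vmid_reserved_owner, 0, sizeof(vmid_reserved_owner));

	/* Record active VMIDs: only these can have dirty TLB entries. */
	for (int cpu = 0; cpu < NR_CPUS_MODEL; cpu++) {
		uint64_t active = atomic_load(&active_vmids[cpu]);

		if (active) {
			vmid_in_use[active & VMID_MASK] = true;
			vmid_reserved_owner[active & VMID_MASK] = active;
		}
	}
	/* The real thing would broadcast TLBI here. */
	pthread_mutex_unlock(&vmid_lock);
}

/* VCPU SCHED IN: fast path on generation match, slow path otherwise. */
static void vcpu_sched_in(struct model_vmid *vmid, int cpu)
{
	uint64_t id = atomic_load(&vmid->id);
	uint64_t n = id & VMID_MASK;

	if (id && !((id ^ atomic_load(&vmid_generation)) & ~VMID_MASK)) {
		atomic_store(&active_vmids[cpu], id);
		return;
	}

	pthread_mutex_lock(&vmid_lock);
	/* Try to match a reserved VMID first ... */
	if (!(id && vmid_reserved_owner[n] == id)) {
		/* ... otherwise allocate a new number with a naive scan. */
		for (n = 1; n < NR_VMIDS && vmid_in_use[n]; n++)
			;
		if (n == NR_VMIDS) {
			/* Exhausted: the real allocator would roll over here. */
			pthread_mutex_unlock(&vmid_lock);
			return;
		}
	}
	vmid_in_use[n] = true;
	id = atomic_load(&vmid_generation) | n;
	atomic_store(&vmid->id, id);
	atomic_store(&active_vmids[cpu], id);
	pthread_mutex_unlock(&vmid_lock);
}

/* VCPU SCHED OUT: clear the active VMID so rollover need not reserve it. */
static void vcpu_sched_out(int cpu)
{
	atomic_store(&active_vmids[cpu], 0);
}

int main(void)
{
	struct model_vmid vm = { .id = 0 };

	vcpu_sched_in(&vm, 0);	/* first run: slow path, fresh VMID */
	vcpu_sched_out(0);
	rollover();		/* nothing active, so nothing gets reserved */
	vcpu_sched_in(&vm, 1);	/* generation mismatch: slow path again */
	return 0;
}

As the mail says, this still needs proper implementation and TLA+ modelling;
the sketch only shows why an active_vmid of 0 (rather than reservation at
rollover) is not enough to keep a running guest's TLB entries valid.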