> -----Original Message-----
> From: Shameerali Kolothum Thodi
> Sent: 11 August 2021 09:48
> To: 'Will Deacon' <will@xxxxxxxxxx>
> Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx; kvmarm@xxxxxxxxxxxxxxxxxxxxx;
> linux-kernel@xxxxxxxxxxxxxxx; maz@xxxxxxxxxx; catalin.marinas@xxxxxxx;
> james.morse@xxxxxxx; julien.thierry.kdev@xxxxxxxxx;
> suzuki.poulose@xxxxxxx; jean-philippe@xxxxxxxxxx;
> Alexandru.Elisei@xxxxxxx; qperret@xxxxxxxxxx; Linuxarm
> <linuxarm@xxxxxxxxxx>
> Subject: RE: [PATCH v3 4/4] KVM: arm64: Clear active_vmids on vCPU
> schedule out
>
> Hi Will,
>
> > -----Original Message-----
> > From: Will Deacon [mailto:will@xxxxxxxxxx]
> > Sent: 03 August 2021 16:31
> > To: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@xxxxxxxxxx>
> > Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx; kvmarm@xxxxxxxxxxxxxxxxxxxxx;
> > linux-kernel@xxxxxxxxxxxxxxx; maz@xxxxxxxxxx; catalin.marinas@xxxxxxx;
> > james.morse@xxxxxxx; julien.thierry.kdev@xxxxxxxxx;
> > suzuki.poulose@xxxxxxx; jean-philippe@xxxxxxxxxx;
> > Alexandru.Elisei@xxxxxxx; qperret@xxxxxxxxxx; Linuxarm
> > <linuxarm@xxxxxxxxxx>
> > Subject: Re: [PATCH v3 4/4] KVM: arm64: Clear active_vmids on vCPU
> > schedule out
>
> [...]
>
> > I think we have to be really careful not to run into the "suspended
> > animation" problem described in ae120d9edfe9 ("ARM: 7767/1: let the ASID
> > allocator handle suspended animation") if we go down this road.
> >
> > Maybe something along the lines of:
> >
> > ROLLOVER
> >
> >   * Take lock
> >   * Inc generation
> >       => This will force everybody down the slow path
> >   * Record active VMIDs
> >   * Broadcast TLBI
> >       => Only active VMIDs can be dirty
> >       => Reserve active VMIDs and mark as allocated
> >
> > VCPU SCHED IN
> >
> >   * Set active VMID
> >   * Check generation
> >   * If mismatch then:
> >       * Take lock
> >       * Try to match a reserved VMID
> >       * If no reserved VMID, allocate new
> >
> > VCPU SCHED OUT
> >
> >   * Clear active VMID
> >
> > but I'm not daft enough to think I got it right first time. I think it
> > needs both implementing *and* modelling in TLA+ before we merge it!
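
To make the ordering concrete, here is my reading of the steps above as a
rough C sketch. This is purely illustrative: set_reserved_vmid(),
find_reserved_vmid(), new_vmid_locked() and VMID_FIRST_VERSION are made-up
placeholders rather than symbols from this series, and the body is just
Will's bullets transcribed, not verified code.

static DEFINE_RAW_SPINLOCK(cpu_vmid_lock);
static atomic64_t vmid_generation;
static DEFINE_PER_CPU(atomic64_t, active_vmids);

/* ROLLOVER: called with cpu_vmid_lock held once the VMID map is full */
static void vmid_rollover(void)
{
        int cpu;

        /* Inc generation: forces everybody down the slow path. */
        atomic64_add(VMID_FIRST_VERSION, &vmid_generation);

        /* Record the active VMIDs; reserve them and mark them allocated. */
        for_each_possible_cpu(cpu) {
                u64 vmid = atomic64_read(&per_cpu(active_vmids, cpu));

                if (vmid)
                        set_reserved_vmid(cpu, vmid);
        }

        /* Broadcast TLBI: only the active VMIDs can be dirty. */
        kvm_call_hyp(__kvm_flush_vm_context);
}

/* VCPU SCHED IN */
void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
{
        unsigned long flags;
        u64 vmid = atomic64_read(&kvm_vmid->id);

        /* Set the active VMID *before* the generation check. */
        atomic64_set(this_cpu_ptr(&active_vmids), vmid);
        if (vmid_gen_match(vmid))
                return;

        raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
        /* Try to match a reserved VMID; if none, allocate a new one. */
        vmid = find_reserved_vmid(kvm_vmid);
        if (!vmid)
                vmid = new_vmid_locked(kvm_vmid);
        atomic64_set(&kvm_vmid->id, vmid);
        atomic64_set(this_cpu_ptr(&active_vmids), vmid);
        raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
}

/* VCPU SCHED OUT */
void kvm_arm_vmid_clear_active(void)
{
        atomic64_set(this_cpu_ptr(&active_vmids), 0);
}

As far as I can tell, publishing active_vmids before the generation check
is what lets a concurrent rollover either see the VMID (and reserve it) or
bump the generation first (and push the vCPU down the slow path).
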
> I attempted to implement the above algo as below. It seems to be
> working in both 16-bit vmid and 4-bit vmid test setup.

It is not :(. I did an extended, overnight test run and it fails. It
looks to me like in my implementation below there is no synchronization
between setting the active VMID and a concurrent rollover. I will have
another go.

Thanks,
Shameer

> Though I am not quite sure this is exactly what you had in mind above
> and covers all corner cases.
>
> Please take a look and let me know.
> (The diff below is against this v3 series)
>
> Thanks,
> Shameer
>
> --->8<----
>
> --- a/arch/arm64/kvm/vmid.c
> +++ b/arch/arm64/kvm/vmid.c
> @@ -43,7 +43,7 @@ static void flush_context(void)
>          bitmap_clear(vmid_map, 0, NUM_USER_VMIDS);
>  
>          for_each_possible_cpu(cpu) {
> -                vmid = atomic64_xchg_relaxed(&per_cpu(active_vmids, cpu), 0);
> +                vmid = atomic64_read(&per_cpu(active_vmids, cpu));
>  
>                  /* Preserve reserved VMID */
>                  if (vmid == 0)
> @@ -125,32 +125,17 @@ void kvm_arm_vmid_clear_active(void)
>  void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
>  {
>          unsigned long flags;
> -        u64 vmid, old_active_vmid;
> +        u64 vmid;
>  
>          vmid = atomic64_read(&kvm_vmid->id);
> -
> -        /*
> -         * Please refer comments in check_and_switch_context() in
> -         * arch/arm64/mm/context.c.
> -         */
> -        old_active_vmid = atomic64_read(this_cpu_ptr(&active_vmids));
> -        if (old_active_vmid && vmid_gen_match(vmid) &&
> -            atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
> -                                     old_active_vmid, vmid))
> +        if (vmid_gen_match(vmid)) {
> +                atomic64_set(this_cpu_ptr(&active_vmids), vmid);
>                  return;
> -
> -        raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
> -
> -        /* Check that our VMID belongs to the current generation. */
> -        vmid = atomic64_read(&kvm_vmid->id);
> -        if (!vmid_gen_match(vmid)) {
> -                vmid = new_vmid(kvm_vmid);
> -                atomic64_set(&kvm_vmid->id, vmid);
>          }
>  
> -
> +        raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
> +        vmid = new_vmid(kvm_vmid);
> +        atomic64_set(&kvm_vmid->id, vmid);
>          atomic64_set(this_cpu_ptr(&active_vmids), vmid);
>          raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
>  }
> --->8<----
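
For reference, the two hunks removed above are what used to tie the fast
path and a concurrent rollover together in v3. Reassembling the removed
lines (nothing new here, just the code the diff drops):

/* rollover side, in flush_context(): atomically sample and clear the
 * active VMID, so a racing fast-path cmpxchg is guaranteed to fail */
vmid = atomic64_xchg_relaxed(&per_cpu(active_vmids, cpu), 0);

/* sched-in side, in kvm_arm_vmid_update(): only take the fast path if
 * the previously published value is still in place, i.e. no rollover
 * ran in between */
old_active_vmid = atomic64_read(this_cpu_ptr(&active_vmids));
if (old_active_vmid && vmid_gen_match(vmid) &&
    atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
                             old_active_vmid, vmid))
        return;

With the plain vmid_gen_match() + atomic64_set() version above, I think
nothing stops a rollover from running entirely between the check and the
set, so the vCPU can re-enter the guest with a VMID that was neither
reserved nor covered by the TLBI; that looks like the missing
synchronization mentioned at the top.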