On Fri, 22 Feb 2019 11:42:46 +0000
Julien Grall <julien.grall@xxxxxxx> wrote:

> Hi Marc,
> 
> On 22/02/2019 09:18, Marc Zyngier wrote:
> > On Thu, 21 Feb 2019 11:02:56 +0000
> > Julien Grall <Julien.Grall@xxxxxxx> wrote:
> > 
> > Hi Julien,
> > 
> >> Hi Christoffer,
> >> 
> >> On 24/01/2019 14:00, Christoffer Dall wrote:
> >>> Note that to avoid mapping the kvm_vmid_bits variable into hyp, we
> >>> simply forego the masking of the vmid value in kvm_get_vttbr and rely on
> >>> update_vmid to always assign a valid vmid value (within the supported
> >>> range).
> >> 
> >> [...]
> >> 
> >>> -	kvm->arch.vmid = kvm_next_vmid;
> >>> +	vmid->vmid = kvm_next_vmid;
> >>>  	kvm_next_vmid++;
> >>> -	kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
> >>> -
> >>> -	/* update vttbr to be used with the new vmid */
> >>> -	pgd_phys = virt_to_phys(kvm->arch.pgd);
> >>> -	BUG_ON(pgd_phys & ~kvm_vttbr_baddr_mask(kvm));
> >>> -	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
> >>> -	kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;
> >>> +	kvm_next_vmid &= (1 << kvm_get_vmid_bits()) - 1;
> >> 
> >> The arm64 version of kvm_get_vmid_bits does not look cheap: it requires
> >> reading the sanitised value of SYS_ID_AA64MMFR1_EL1, which is implemented
> >> using bsearch.
> >> 
> >> So wouldn't it be better to keep the kvm_vmid_bits variable for use in
> >> update_vttbr()?
> > 
> > How often does this happen? Can you measure this overhead at all?
> > 
> > My understanding is that we hit this path on rollover only, having IPIed
> > all CPUs and invalidated all TLBs. I seriously doubt you can observe
> > any sort of overhead at all, given that it is so incredibly rare. But
> > feel free to prove me wrong!
> 
> That would happen on roll-over and the first time you allocate a VMID for
> the VM.
> 
> I am planning to run some tests with 3-bit VMIDs and provide the results
> next week.

Sure, but who implements 3-bit VMIDs? I'm only interested in performance
on real HW, and the minimal implementation allowed is 8 bits. So don't
bother testing this with such contrived conditions.

Test it for real, on a system that can run enough stuff concurrently to
quickly exhaust its VMID space and provoke rollovers.

Alternatively, measure the time it takes to create a single VM that exits
immediately. On its own, that'd be a useful unit test for extremely
short-lived VMs.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
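
For concreteness, the trade-off under discussion can be sketched in a few
lines of standalone userspace C. This is illustrative only, not the kernel
code: read_vmid_bits() is a hypothetical stand-in for arm64's
kvm_get_vmid_bits() (which looks up the sanitised ID_AA64MMFR1_EL1 value
via bsearch), and here it is called once to populate a cached variable, as
Julien suggests, rather than on every allocation.

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for kvm_get_vmid_bits(): pretend this lookup is costly. */
	static unsigned int read_vmid_bits(void)
	{
		return 8;	/* minimum VMID width the architecture allows */
	}

	static unsigned int cached_vmid_bits;	/* set once at init */
	static uint64_t kvm_next_vmid = 1;

	static uint64_t alloc_vmid(void)
	{
		uint64_t vmid = kvm_next_vmid++;

		/* Mask with the cached width instead of redoing the lookup. */
		kvm_next_vmid &= (1ULL << cached_vmid_bits) - 1;
		return vmid;
	}

	int main(void)
	{
		cached_vmid_bits = read_vmid_bits();	/* one-time cost */

		/* Walk past 2^8 allocations to show the counter wrapping. */
		for (int i = 0; i < 260; i++) {
			uint64_t v = alloc_vmid();
			if (i >= 253)
				printf("allocation %3d -> VMID %3llu\n", i,
				       (unsigned long long)v);
		}
		return 0;
	}

In the real code, kvm_next_vmid wrapping to 0 is what starts a new VMID
generation, forcing every CPU out of the guest and invalidating the TLBs;
that rarity is the basis of Marc's argument that the cost of the lookup on
this path is unobservable in practice.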