On Thu, 17 Jun 2021 14:24:25 +0100,
Steven Price <steven.price@xxxxxxx> wrote:
> 
> On 17/06/2021 14:15, Marc Zyngier wrote:
> > On Thu, 17 Jun 2021 13:13:22 +0100,
> > Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> >>
> >> On Mon, Jun 14, 2021 at 10:05:18AM +0100, Steven Price wrote:
> >>> I realise there are still open questions[1] around the performance of
> >>> this series (the 'big lock', tag_sync_lock, introduced in the first
> >>> patch). But there should be no impact on non-MTE workloads and until we
> >>> get real MTE-enabled hardware it's hard to know whether there is a need
> >>> for something more sophisticated or not. Peter Collingbourne's patch[3]
> >>> to clear the tags at page allocation time should hide more of the impact
> >>> for non-VM cases. So the remaining concern is around VM startup which
> >>> could be effectively serialised through the lock.
> >> [...]
> >>> [1]: https://lore.kernel.org/r/874ke7z3ng.wl-maz%40kernel.org
> >>
> >> Start-up, VM resume, migration could be affected by this lock, basically
> >> any time you fault a page into the guest. As you said, for now it should
> >> be fine as long as the hardware doesn't support MTE or qemu doesn't
> >> enable MTE in guests. But the problem won't go away.
> > 
> > Indeed. And I find it odd to say "it's not a problem, we don't have
> > any HW available". By this token, why should we merge this work in the
> > first place, or any of the MTE work that has gone into the kernel over
> > the past years?
> > 
> >> We have a partial solution with an array of locks to mitigate against
> >> this but there's still the question of whether we should actually bother
> >> for something that's unlikely to happen in practice: MAP_SHARED memory
> >> in guests (ignoring the stage 1 case for now).
> >>
> >> If MAP_SHARED in guests is not a realistic use-case, we have the vma in
> >> user_mem_abort() and if the VM_SHARED flag is set together with MTE
> >> enabled for guests, we can reject the mapping.
> > 
> > That's a reasonable approach. I wonder whether we could do that right
> > at the point where the memslot is associated with the VM, like this:
> > 
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index a36a2e3082d8..ebd3b3224386 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1376,6 +1376,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> >  		if (!vma)
> >  			break;
> >  
> > +		if (kvm_has_mte(kvm) && vma->vm_flags & VM_SHARED)
> > +			return -EINVAL;
> > +
> >  		/*
> >  		 * Take the intersection of this VMA with the memory region
> >  		 */
> > 
> > which takes the problem out of the fault path altogether? We document
> > the restriction and move on. With that, we can use a non-locking
> > version of mte_sync_page_tags().
> 
> Does this deal with the case where the VMAs are changed after the
> memslot is created? While we can do the check here to give the VMM a
> heads-up if it gets it wrong, I think we also need it in
> user_mem_abort() to deal with a VMM which mmap()s over the VA of the
> memslot. Or am I missing something?

No, you're right. I wish the memslot API wasn't so lax... Anyway, even
a VMA flag check in user_mem_abort() will be cheaper than this new BKL.

> But if everyone is happy with the restriction (just for KVM) of not
> allowing MTE+VM_SHARED then that sounds like a good way forward.

Definitely works for me.

	M.

-- 
Without deviation from the norm, progress is not possible.
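
For illustration only, a check along the lines discussed above for
user_mem_abort() might look roughly like the fragment below. This is a
sketch, not the posted patch: the helper name, its placement on the
stage-2 fault path and the -EFAULT return value are assumptions; only
kvm_has_mte() and the VM_SHARED test come from the thread itself.

#include <linux/kvm_host.h>	/* struct kvm, kvm_has_mte() (added by this series) */
#include <linux/mm.h>		/* struct vm_area_struct, VM_SHARED */

/*
 * Illustrative sketch of the VM_SHARED check discussed above: an
 * MTE-enabled guest simply refuses to map memory backed by a MAP_SHARED
 * VMA, so the fault path never needs to take tag_sync_lock while
 * initialising tags. Helper name and error code are hypothetical.
 */
static int reject_shared_mte_mapping(struct kvm *kvm,
				     struct vm_area_struct *vma)
{
	if (kvm_has_mte(kvm) && (vma->vm_flags & VM_SHARED))
		return -EFAULT;	/* hypothetical choice of error code */

	return 0;
}

Since user_mem_abort() already has the faulting VMA in hand, such a
check amounts to a single flag test per stage-2 fault, which is the
basis of the "cheaper than this new BKL" comparison above.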