Re: [PATCH 2/6] KVM: Add KVM_CAP_DIRTY_LOG_RING_ORDERED capability and config option

On Fri, Sep 23, 2022 at 03:28:34PM +0100, Marc Zyngier wrote:
> On Thu, 22 Sep 2022 22:48:19 +0100,
> Peter Xu <peterx@xxxxxxxxxx> wrote:
> > 
> > On Thu, Sep 22, 2022 at 06:01:29PM +0100, Marc Zyngier wrote:
> > > In order to differentiate between architectures that require no extra
> > > synchronisation when accessing the dirty ring and those that do,
> > > add a new capability (KVM_CAP_DIRTY_LOG_RING_ORDERED) that identifies
> > > the latter sort. TSO architectures can obviously advertise both, while
> > > relaxed architectures must only advertise the ORDERED version.
> > > 
> > > Suggested-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> > > Signed-off-by: Marc Zyngier <maz@xxxxxxxxxx>
> > > ---
> > >  include/linux/kvm_dirty_ring.h |  6 +++---
> > >  include/uapi/linux/kvm.h       |  1 +
> > >  virt/kvm/Kconfig               | 14 ++++++++++++++
> > >  virt/kvm/Makefile.kvm          |  2 +-
> > >  virt/kvm/kvm_main.c            | 11 +++++++++--
> > >  5 files changed, 28 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/include/linux/kvm_dirty_ring.h b/include/linux/kvm_dirty_ring.h
> > > index 906f899813dc..7a0c90ae9a3f 100644
> > > --- a/include/linux/kvm_dirty_ring.h
> > > +++ b/include/linux/kvm_dirty_ring.h
> > > @@ -27,7 +27,7 @@ struct kvm_dirty_ring {
> > >  	int index;
> > >  };
> > >  
> > > -#ifndef CONFIG_HAVE_KVM_DIRTY_RING
> > > +#ifndef CONFIG_HAVE_KVM_DIRTY_LOG
> > 
> > s/LOG/LOG_RING/ according to the commit message?  Otherwise the name
> > seems too generic.
> 
> The commit message talks about the capability, while the above is the
> config option. If you find the names inappropriate, feel free to
> suggest alternatives (for all I care, they could be called FOO, BAR
> and BAZ).

The existing name from David looks better to me than the new one.

> 
> > Pure question to ask: is it required to have a new cap just for the
> > ordering?  IIUC if x86 was the only supported anyway before, it means all
> > released old kvm binaries are always safe even without the strict
> > orderings.  As long as we rework all the memory ordering bits before
> > declaring support of yet another arch, we're good.  Or am I wrong?
> 
> Someone will show up with an old userspace which probes for the sole
> existing capability, and things start failing subtly. It is quite
> likely that the userspace code is built for all architectures,

I didn't quite follow here.  Since the dirty ring was only supported on
x86 in both kvm and qemu, I don't see the risk.

Assuming we have the old binary:

If it runs on an old kernel, it'll work like before.

If it runs on a new kernel, the kernel will be stricter about memory
barriers but should still be compatible with the old behavior (not vice
versa, so I'd understand if we were losing the ordering, but we're not).
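
To make "stricter about memory barriers" concrete, here is a minimal
sketch of what such a rework could look like on the kernel side, assuming
it upgrades the plain read of the ring-entry flags to an acquire load
(the helper name and flag mirror the existing code in
virt/kvm/dirty_ring.c; the exact change is just a guess, not taken from
this series):

/*
 * Sketch only: check whether userspace has collected (harvested) a
 * dirty-ring entry.  On x86, TSO makes a plain READ_ONCE() of the
 * flags sufficient; on weakly ordered architectures an acquire load
 * is needed so that KVM only resets and reuses the entry after it
 * has observed userspace's release store of KVM_DIRTY_GFN_F_RESET.
 */
static inline bool kvm_dirty_gfn_harvested(struct kvm_dirty_gfn *gfn)
{
	return smp_load_acquire(&gfn->flags) & KVM_DIRTY_GFN_F_RESET;
}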

Any further elaboration would be much appreciated.

Thanks,

> and we
> want to make sure that userspace actively buys into the new ordering
> requirements. A simple way to do this is to expose a new capability,
> making the new requirement obvious. Architectures with relaxed
> ordering semantics will only implement the new one, while x86 will
> implement both.
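
In other words, the expectation seems to be that userspace probes in this
order (a hypothetical sketch; the helper below is made up, and the
_ORDERED name is the one introduced by this patch):

#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Hypothetical helper: enable the dirty ring on a VM fd, preferring the
 * ordering-aware capability and falling back to the original one, which
 * new kernels keep advertising on TSO hosts such as x86.
 */
static int enable_dirty_ring(int vm_fd, unsigned int ring_bytes)
{
	struct kvm_enable_cap cap;
	int max_bytes;

	memset(&cap, 0, sizeof(cap));

	max_bytes = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_DIRTY_LOG_RING_ORDERED);
	if (max_bytes > 0) {
		cap.cap = KVM_CAP_DIRTY_LOG_RING_ORDERED;
	} else {
		max_bytes = ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_DIRTY_LOG_RING);
		if (max_bytes <= 0)
			return -ENOTSUP;	/* no dirty ring support at all */
		cap.cap = KVM_CAP_DIRTY_LOG_RING;
	}

	if (ring_bytes > (unsigned int)max_bytes)
		return -EINVAL;			/* bigger than the host allows */

	cap.args[0] = ring_bytes;		/* ring size in bytes, power of two */
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap) ? -errno : 0;
}

Old userspace that only knows KVM_CAP_DIRTY_LOG_RING keeps working on
x86, while a relaxed architecture only ever advertises the new
capability, so such userspace simply doesn't see a dirty ring there.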

-- 
Peter Xu

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


