RE: [PATCH] KVM: x86: enable dirty log gradually in small chunks

Hi Sean,

> -----Original Message-----
> From: Sean Christopherson [mailto:sean.j.christopherson@xxxxxxxxx]
> Sent: Wednesday, February 19, 2020 5:23 AM
> To: Zhoujian (jay) <jianjay.zhou@xxxxxxxxxx>
> Cc: kvm@xxxxxxxxxxxxxxx; pbonzini@xxxxxxxxxx; peterx@xxxxxxxxxx;
> wangxin (U) <wangxinxin.wang@xxxxxxxxxx>; linfeng (M)
> <linfeng23@xxxxxxxxxx>; Huangweidong (C) <weidong.huang@xxxxxxxxxx>
> Subject: Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
> 
> On Tue, Feb 18, 2020 at 07:00:13PM +0800, Jay Zhou wrote:
> > It could take kvm->mmu_lock for an extended period of time when
> > enabling dirty log for the first time. The main cost is to clear all
> > the D-bits of last level SPTEs. This situation can benefit from manual
> > dirty log protect as well, which can reduce the mmu_lock time taken.
> > The sequence is like this:
> >
> > 1. Set all the bits of the first dirty bitmap to 1 when enabling
> >    dirty log for the first time
> > 2. Only write protect the huge pages
> > 3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
> > 4. KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> >    SPTEs gradually in small chunks
> >
> > On an Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz, I ran some tests with
> > a 128G Windows VM and measured the time taken by
> > memory_global_dirty_log_start; here are the numbers:
> >
> > VM Size        Before    After optimization
> > 128G           460ms     10ms
> >
> > Signed-off-by: Jay Zhou <jianjay.zhou@xxxxxxxxxx>
> > ---
> >  arch/x86/kvm/vmx/vmx.c   |  5 +++++
> >  include/linux/kvm_host.h |  5 +++++
> >  virt/kvm/kvm_main.c      | 10 ++++++++--
> >  3 files changed, 18 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 3be25ec..a8d64f6 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -7201,7 +7201,12 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
> >  static void vmx_slot_enable_log_dirty(struct kvm *kvm,
> >  				     struct kvm_memory_slot *slot)
> >  {
> > +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > +	if (!kvm->manual_dirty_log_protect)
> > +		kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> > +#else
> >  	kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> > +#endif
> 
> The ifdef is unnecessary, this is in VMX (x86) code, i.e.
> CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT is guaranteed to be
> defined.

I agree.
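
Something like this then for v2 (untested sketch, only the conditional
compilation dropped, nothing else changed):

static void vmx_slot_enable_log_dirty(struct kvm *kvm,
				      struct kvm_memory_slot *slot)
{
	/* manual_dirty_log_protect always exists on x86, no #ifdef needed */
	if (!kvm->manual_dirty_log_protect)
		kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
	kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
}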

> 
> >  	kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
> >  }
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index e89eb67..fd149b0 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -360,6 +360,11 @@ static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
> >  	return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
> >  }
> >
> > +static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot *memslot)
> > +{
> > +	bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
> > +}
> 
> I'd prefer this be open coded with a comment, e.g. "first" is misleading because
> it's really "initial dirty bitmap for this memslot after enabling dirty logging".

kvm_create_dirty_bitmap allocates a buffer twice as large as the actual dirty
bitmap, and kvm_second_dirty_bitmap() returns the second half of that buffer.
That is why I named the helper kvm_set_first_dirty_bitmap(): "first" refers to
the first half of the allocation (not the first time dirty logging is enabled).

I'll try to make it clearer if the name is misleading...
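
To illustrate with the open-coded version you suggested (untested, the
comment wording is just a draft):

	/*
	 * The allocation is twice the size of the dirty bitmap (see
	 * kvm_second_dirty_bitmap()); only the first half, i.e. the part
	 * userspace reads via KVM_GET_DIRTY_LOG, is set here so that the
	 * initial bitmap reports every page as dirty.
	 */
	bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);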

> > +
> >  struct kvm_s390_adapter_int {
> >  	u64 ind_addr;
> >  	u64 summary_addr;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 70f03ce..08565ed 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -862,7 +862,8 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
> >   * Allocation size is twice as large as the actual dirty bitmap size.
> >   * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
> >   */
> > -static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> > +static int kvm_create_dirty_bitmap(struct kvm *kvm,
> > +				struct kvm_memory_slot *memslot)
> >  {
> >  	unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
> >
> > @@ -870,6 +871,11 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> >  	if (!memslot->dirty_bitmap)
> >  		return -ENOMEM;
> >
> > +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> 
> The ifdef is unnecessary, manual_dirty_log_protect always exists and is
> guaranteed to be false if
> CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=n.  This isn't exactly a
> hot path so saving the uop isn't worth the #ifdef.

After rereading the code, I think you're right.
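
So simply this (untested), with everything else in kvm_create_dirty_bitmap()
left as is:

	/* stays false when CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=n */
	if (kvm->manual_dirty_log_protect)
		kvm_set_first_dirty_bitmap(memslot);

i.e. without the #if/#endif.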

> 
> > +	if (kvm->manual_dirty_log_protect)
> > +		kvm_set_first_dirty_bitmap(memslot);
> > +#endif
> > +
> >  	return 0;
> >  }
> >
> > @@ -1094,7 +1100,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >
> >  	/* Allocate page dirty bitmap if needed */
> >  	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> > -		if (kvm_create_dirty_bitmap(&new) < 0)
> > +		if (kvm_create_dirty_bitmap(kvm, &new) < 0)
> 
> Rather than pass @kvm, what about doing bitmap_set() in
> __kvm_set_memory_region() and
> s/kvm_create_dirty_bitmap/kvm_alloc_dirty_bitmap to make it clear that the
> helper is only responsible for allocation?  And opportunistically drop the
> superfluous "< 0", e.g.
> 
> 	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> 		if (kvm_alloc_dirty_bitmap(&new))
> 			goto out_free;
> 
> 		/*
> 		 * WORDS!
> 		 */
> 		if (kvm->manual_dirty_log_protect)
> 			bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
> 	}

That is clearer, thanks for the suggestion.
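
For v2 I'll rework it along these lines (untested; the comment wording is
only a draft, and I used "new" since that is the local variable in
__kvm_set_memory_region()):

	/* Allocate page dirty bitmap if needed */
	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
		if (kvm_alloc_dirty_bitmap(&new))
			goto out_free;

		/*
		 * Report all pages dirty in the initial bitmap so that
		 * KVM_CLEAR_DIRTY_LOG can clear the D-bits of the leaf
		 * SPTEs gradually in small chunks.
		 */
		if (kvm->manual_dirty_log_protect)
			bitmap_set(new.dirty_bitmap, 0, new.npages);
	}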

Regards,
Jay Zhou

> >  			goto out_free;
> >  	}
> >
> > --
> > 1.8.3.1
> >
> >


