Hi Paolo,

> -----Original Message-----
> From: Paolo Bonzini [mailto:pbonzini@xxxxxxxxxx]
> Sent: Tuesday, February 18, 2020 7:40 PM
> To: Zhoujian (jay) <jianjay.zhou@xxxxxxxxxx>; kvm@xxxxxxxxxxxxxxx
> Cc: peterx@xxxxxxxxxx; wangxin (U) <wangxinxin.wang@xxxxxxxxxx>;
> linfeng (M) <linfeng23@xxxxxxxxxx>; Huangweidong (C) <weidong.huang@xxxxxxxxxx>
> Subject: Re: [PATCH] KVM: x86: enable dirty log gradually in small chunks
>
> On 18/02/20 12:00, Jay Zhou wrote:
> > It could take kvm->mmu_lock for an extended period of time when
> > enabling dirty log for the first time. The main cost is to clear all
> > the D-bits of last level SPTEs. This situation can benefit from manual
> > dirty log protect as well, which can reduce the mmu_lock time taken.
> > The sequence is like this:
> >
> > 1. Set all the bits of the first dirty bitmap to 1 when enabling
> >    dirty log for the first time
> > 2. Only write protect the huge pages
> > 3. KVM_GET_DIRTY_LOG returns the dirty bitmap info
> > 4. KVM_CLEAR_DIRTY_LOG will clear D-bit for each of the leaf level
> >    SPTEs gradually in small chunks
> >
> > Under the Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz environment, I did
> > some tests with a 128G windows VM and counted the time taken of
> > memory_global_dirty_log_start, here is the numbers:
> >
> > VM Size    Before    After optimization
> > 128G       460ms     10ms
>
> This is a good idea, but could userspace expect the bitmap to be 0 for
> pages that haven't been touched?

Userspace gets the bitmap information only from the kernel side, so it is up
to the kernel to distinguish whether a page has been touched, which it does
today by traversing the rmap. I don't have any other idea yet, :-(
But even if userspace gets 1 for pages that haven't been touched, those pages
will be filtered out in the kernel's KVM_CLEAR_DIRTY_LOG ioctl path anyway,
since no rmap exists for them, I think.

> I think this should be added as a new bit to the KVM_ENABLE_CAP for
> KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2. That is:
>
> - in kvm_vm_ioctl_check_extension_generic, return 3 for
>   KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 (better: define two constants
>   KVM_DIRTY_LOG_MANUAL_PROTECT as 1 and KVM_DIRTY_LOG_INITIALLY_SET as 2).
>
> - in kvm_vm_ioctl_enable_cap_generic, allow bit 0 and bit 1 for cap->args[0]
>
> - in kvm_vm_ioctl_enable_cap_generic, check "if
>   (!(kvm->manual_dirty_log_protect & KVM_DIRTY_LOG_INITIALLY_SET))".

Thanks for the details! I'll add them in the next version.
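For reference, here is a rough sketch of how userspace could consume the
capability bits you describe (just an illustration, not part of this patch;
the DIRTY_LOG_* names and the helper are placeholders for whatever the final
uapi constants turn out to be):

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

/* Placeholder names; per your suggestion bit 0 = manual protect, bit 1 = initially set. */
#define DIRTY_LOG_MANUAL_PROTECT	(1u << 0)
#define DIRTY_LOG_INITIALLY_SET		(1u << 1)

static int enable_manual_dirty_log(int vm_fd)
{
	struct kvm_enable_cap cap;
	int supported;

	/* With both bits implemented, this returns 3. */
	supported = ioctl(vm_fd, KVM_CHECK_EXTENSION,
			  KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2);
	if (supported < 0)
		return supported;

	memset(&cap, 0, sizeof(cap));
	cap.cap = KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2;
	/* Request only the bits the kernel actually advertises. */
	cap.args[0] = supported & (DIRTY_LOG_MANUAL_PROTECT |
				   DIRTY_LOG_INITIALLY_SET);

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}

On the kernel side, kvm_vm_ioctl_enable_cap_generic would then only need to
reject unknown bits in cap->args[0] and store the accepted value into
kvm->manual_dirty_log_protect, as you outlined.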
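And a minimal sketch of steps 3 and 4 of the sequence from the VMM side,
assuming the slot number, chunk size and bitmap allocation are handled
elsewhere (illustrative only):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Must stay a multiple of 64; KVM requires 64-page alignment for clears. */
#define CHUNK_PAGES	4096ULL

static int sync_and_clear_slot(int vm_fd, __u32 slot,
			       void *bitmap, __u64 npages)
{
	struct kvm_dirty_log get = { .slot = slot, .dirty_bitmap = bitmap };
	__u64 first;
	int ret;

	/* Step 3: right after the first enablement this returns an all-ones bitmap. */
	ret = ioctl(vm_fd, KVM_GET_DIRTY_LOG, &get);
	if (ret < 0)
		return ret;

	/* Step 4: hand the bitmap back gradually, one small chunk at a time. */
	for (first = 0; first < npages; first += CHUNK_PAGES) {
		__u64 left = npages - first;
		struct kvm_clear_dirty_log clear = {
			.slot = slot,
			.first_page = first,
			.num_pages = (__u32)(left < CHUNK_PAGES ? left : CHUNK_PAGES),
			.dirty_bitmap = (char *)bitmap + first / 8,
		};

		ret = ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &clear);
		if (ret < 0)
			return ret;
	}
	return 0;
}

Spreading the KVM_CLEAR_DIRTY_LOG calls out like this is what keeps each
mmu_lock hold time small.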
Regards,
Jay Zhou

>
> Thanks,
>
> Paolo
>
> > Signed-off-by: Jay Zhou <jianjay.zhou@xxxxxxxxxx>
> > ---
> >  arch/x86/kvm/vmx/vmx.c   |  5 +++++
> >  include/linux/kvm_host.h |  5 +++++
> >  virt/kvm/kvm_main.c      | 10 ++++++++--
> >  3 files changed, 18 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 3be25ec..a8d64f6 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -7201,7 +7201,12 @@ static void vmx_sched_in(struct kvm_vcpu *vcpu, int cpu)
> >  static void vmx_slot_enable_log_dirty(struct kvm *kvm,
> >  				     struct kvm_memory_slot *slot)
> >  {
> > +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > +	if (!kvm->manual_dirty_log_protect)
> > +		kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> > +#else
> >  	kvm_mmu_slot_leaf_clear_dirty(kvm, slot);
> > +#endif
> >  	kvm_mmu_slot_largepage_remove_write_access(kvm, slot);
> >  }
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index e89eb67..fd149b0 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -360,6 +360,11 @@ static inline unsigned long *kvm_second_dirty_bitmap(struct kvm_memory_slot *mem
> >  	return memslot->dirty_bitmap + len / sizeof(*memslot->dirty_bitmap);
> >  }
> >
> > +static inline void kvm_set_first_dirty_bitmap(struct kvm_memory_slot *memslot)
> > +{
> > +	bitmap_set(memslot->dirty_bitmap, 0, memslot->npages);
> > +}
> > +
> >  struct kvm_s390_adapter_int {
> >  	u64 ind_addr;
> >  	u64 summary_addr;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 70f03ce..08565ed 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -862,7 +862,8 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
> >   * Allocation size is twice as large as the actual dirty bitmap size.
> >   * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
> >   */
> > -static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> > +static int kvm_create_dirty_bitmap(struct kvm *kvm,
> > +				   struct kvm_memory_slot *memslot)
> >  {
> >  	unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
> >
> > @@ -870,6 +871,11 @@ static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
> >  	if (!memslot->dirty_bitmap)
> >  		return -ENOMEM;
> >
> > +#if CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT
> > +	if (kvm->manual_dirty_log_protect)
> > +		kvm_set_first_dirty_bitmap(memslot);
> > +#endif
> > +
> >  	return 0;
> >  }
> >
> > @@ -1094,7 +1100,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
> >
> >  	/* Allocate page dirty bitmap if needed */
> >  	if ((new.flags & KVM_MEM_LOG_DIRTY_PAGES) && !new.dirty_bitmap) {
> > -		if (kvm_create_dirty_bitmap(&new) < 0)
> > +		if (kvm_create_dirty_bitmap(kvm, &new) < 0)
> >  			goto out_free;
> >  	}
> >