Re: [PATCH v2 3/4] KVM: Dirty memory tracking for performant checkpointing solutions

On 04/01/2017 21:44, Cao, Lei wrote:
> Implement dirty list full forcing vcpus to exit.
> 
> Signed-off-by: Lei Cao <lei.cao@xxxxxxxxxxx>
> ---
>  arch/x86/include/asm/kvm_host.h |  7 +++++++
>  arch/x86/kvm/mmu.c              |  7 +++++++
>  arch/x86/kvm/vmx.c              |  7 +++++++
>  arch/x86/kvm/x86.c              | 10 ++++++++++
>  include/linux/kvm_host.h        |  1 +
>  include/uapi/linux/kvm.h        |  1 +
>  virt/kvm/kvm_main.c             | 36 ++++++++++++++++++++++++++++++++++++
>  7 files changed, 69 insertions(+)
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5707129..e2f4cee 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6714,6 +6714,16 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>  		 */
>  		if (kvm_check_request(KVM_REQ_HV_STIMER, vcpu))
>  			kvm_hv_process_stimers(vcpu);
> +		if (kvm_check_request(KVM_REQ_EXIT_DIRTY_LOG_FULL, vcpu)) {
> +			vcpu->run->exit_reason = KVM_EXIT_DIRTY_LOG_FULL;
> +			r = -EINTR;
> +			if (vcpu->need_exit) {
> +				vcpu->need_exit = false;
> +				kvm_make_all_cpus_request(vcpu->kvm,
> +					KVM_REQ_EXIT_DIRTY_LOG_FULL);
> +			}
> +			goto out;
> +		}
>  	}
>  
>  	/*
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 7a85b30..b7fedeb 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -283,6 +283,7 @@ struct kvm_vcpu {
>  	struct dentry *debugfs_dentry;
>  #ifdef KVM_DIRTY_LOG_PAGE_OFFSET
>  	struct gfn_list_t *dirty_logs;
> +	bool need_exit;
>  #endif
>  };
>  
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 05332de..bacb8db 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -205,6 +205,7 @@ struct kvm_hyperv_exit {
>  #define KVM_EXIT_S390_STSI        25
>  #define KVM_EXIT_IOAPIC_EOI       26
>  #define KVM_EXIT_HYPERV           27
> +#define KVM_EXIT_DIRTY_LOG_FULL   28
>  
>  /* For KVM_EXIT_INTERNAL_ERROR */
>  /* Emulate instruction failed. */
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index bff980c..00d7989 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -270,6 +270,7 @@ int kvm_vcpu_init(struct kvm_vcpu *vcpu, struct kvm *kvm, unsigned id)
>  		}
>  		vcpu->dirty_logs = page_address(page);
>  	}
> +	vcpu->need_exit = false;
>  #endif
>  
>  	kvm_vcpu_set_in_spin_loop(vcpu, false);
> @@ -3030,6 +3031,29 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
>  }
>  
>  #ifdef KVM_DIRTY_LOG_PAGE_OFFSET
> +static void kvm_mt_dirty_log_full(struct kvm *kvm, struct kvm_vcpu *vcpu)
> +{
> +	/*
> +	 * Request vcpu exits, but if interrupts are disabled, we have
> +	 * to defer the requests because smp_call_xxx may deadlock when
> +	 * called that way.
> +	 */
> +	if (vcpu && irqs_disabled()) {
> +		kvm_make_request(KVM_REQ_EXIT_DIRTY_LOG_FULL, vcpu);
> +		vcpu->need_exit = true;
> +	} else {
> +		WARN_ON(irqs_disabled());
> +		kvm_make_all_cpus_request(kvm,
> +					  KVM_REQ_EXIT_DIRTY_LOG_FULL);
> +	}

Please add a tracepoint here.  Also, why force all CPUs to exit when
only this VCPU's log is full?  Let userspace sort that out instead;
that also removes the need for vcpu->need_exit.
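
For the tracepoint, something along these lines would do (event and
field names are purely illustrative, not a finished patch):

```c
/* Hypothetical sketch for arch/x86/kvm/trace.h: fire when a dirty
 * log fills up, recording which vcpu (if any) hit the limit. */
TRACE_EVENT(kvm_dirty_log_full,
	TP_PROTO(unsigned int vcpu_id),
	TP_ARGS(vcpu_id),

	TP_STRUCT__entry(
		__field(unsigned int, vcpu_id)
	),

	TP_fast_assign(
		__entry->vcpu_id = vcpu_id;
	),

	TP_printk("vcpu %u dirty log full", __entry->vcpu_id)
);
```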

I haven't verified that the !vcpu && irqs_disabled() case is
unreachable.  Please document your reasoning here.

> +}
> +
> +/*
> + * estimated number of pages being dirtied during vcpu exit, not counting
> + * hardware dirty log (PML) flush
> + */
> +#define KVM_MT_DIRTY_PAGE_NUM_EXTRA 128
> +
>  void kvm_mt_mark_page_dirty(struct kvm *kvm, struct kvm_memory_slot *slot,
>  	struct kvm_vcpu *vcpu, gfn_t gfn)
>  {
> @@ -3037,6 +3061,7 @@ void kvm_mt_mark_page_dirty(struct kvm *kvm, struct kvm_memory_slot *slot,
>  	int slot_id;
>  	u32 as_id = 0;
>  	u64 offset;
> +	u32 extra = KVM_MT_DIRTY_PAGE_NUM_EXTRA;
>  
>  	if (!slot || !slot->dirty_bitmap || !kvm->dirty_log_size)
>  		return;
> @@ -3068,6 +3093,17 @@ void kvm_mt_mark_page_dirty(struct kvm *kvm, struct kvm_memory_slot *slot,
>  	gfnlist->dirty_gfns[gfnlist->dirty_index].offset = offset;
>  	smp_wmb();
>  	gfnlist->dirty_index++;
> +
> +	/*
> +	 * more pages will be dirtied during vcpu exit, e.g. pml log
> +	 * being flushed. So allow some buffer space.
> +	 */
> +	if (vcpu)
> +		extra += kvm_mt_cpu_dirty_log_size();

Please make "extra" a field in struct kvm, so that it is computed just
once.  Also, how many pages will be dirtied?  Will they only be in the
global log or also in the per-vcpu logs?  Perhaps there should be
separate "extra"s for the global and vcpu logs.

Thanks,

Paolo

> +	if (gfnlist->dirty_index == (kvm->max_dirty_logs - extra))
> +		kvm_mt_dirty_log_full(kvm, vcpu);
> +
>  	if (!vcpu)
>  		spin_unlock(&kvm->dirty_log_lock);
>  }
> 