Re: [PATCH v2 1/2] KVM: s390x: some utility functions for migration

On 01/18/2018 01:56 PM, Claudio Imbrenda wrote:
> These are some utility functions that will be used later on for storage
> attributes migration.
> 
> Signed-off-by: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxxxxxxx>
> ---
>  arch/s390/kvm/kvm-s390.c | 32 ++++++++++++++++++++++++++++++++
>  arch/s390/kvm/kvm-s390.h |  6 ++++++
>  2 files changed, 38 insertions(+)
> 
> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
> index 6f17031..5a69334 100644
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -1512,6 +1512,38 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args)
>  #define KVM_S390_CMMA_SIZE_MAX ((u32)KVM_S390_SKEYS_MAX)
> 
>  /*
> + * Similar to gfn_to_memslot, but returns a memslot also when the address falls

gfn_to_memslot returns a struct kvm_memory_slot *, but this returns an int
(a slot index)?

> + * in a hole. In that case a memslot near the hole is returned.

Can you please clarify that statement? Will it be the slot that is closest
in terms of bytes, or something else? Maybe also provide a use case and an
example.
> + */
> +static int gfn_to_memslot_approx(struct kvm *kvm, gfn_t gfn)
> +{
> +	struct kvm_memslots *slots = kvm_memslots(kvm);
> +	int start = 0, end = slots->used_slots;
> +	int slot = atomic_read(&slots->lru_slot);
> +	struct kvm_memory_slot *memslots = slots->memslots;
> +
> +	if (gfn >= memslots[slot].base_gfn &&
> +	    gfn < memslots[slot].base_gfn + memslots[slot].npages)
> +		return slot;
> +
> +	while (start < end) {
> +		slot = start + (end - start) / 2;
> +
> +		if (gfn >= memslots[slot].base_gfn)
> +			end = slot;
> +		else
> +			start = slot + 1;
> +	}
> +
> +	if (gfn >= memslots[start].base_gfn &&
> +	    gfn < memslots[start].base_gfn + memslots[start].npages) {
> +		atomic_set(&slots->lru_slot, start);
> +	}

Another question for Paolo/Radim: in case we need such a function, would
it be better placed in common code (virt/kvm/kvm_main.c)?
> +
> +	return start;
> +}
> +
> +/*
>   * This function searches for the next page with dirty CMMA attributes, and
>   * saves the attributes in the buffer up to either the end of the buffer or
>   * until a block of at least KVM_S390_MAX_BIT_DISTANCE clean bits is found;
> diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
> index 04a3e91..b86c1ad 100644
> --- a/arch/s390/kvm/kvm-s390.h
> +++ b/arch/s390/kvm/kvm-s390.h
> @@ -183,6 +183,12 @@ static inline int kvm_s390_user_cpu_state_ctrl(struct kvm *kvm)
>  	return kvm->arch.user_cpu_state_ctrl != 0;
>  }
> 
> +static inline unsigned long *cmma_bitmap(struct kvm_memory_slot *slot)
> +{
> +	return slot->dirty_bitmap +
> +	       kvm_dirty_bitmap_bytes(slot) / sizeof(*slot->dirty_bitmap);
> +}
> +

Hmmm, in virt/kvm/kvm_main.c we already have this in kvm_get_dirty_log_protect:

        n = kvm_dirty_bitmap_bytes(memslot);
        dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);

Does it make sense to make that helper common (and maybe call it

unsigned long *kvm_get_shadow_dirty(struct kvm_memory_slot *slot))?



