Re: [PATCH v2 1/2] KVM: s390x: some utility functions for migration

On Mon, 22 Jan 2018 10:08:47 +0100
Christian Borntraeger <borntraeger@xxxxxxxxxx> wrote:

[...]

> >  /*
> > + * Similar to gfn_to_memslot, but returns a memslot also when the address falls
> 
> gfn_to_memslot returns a memslot, this returns an int?

True, I have to update the comment.

> > + * in a hole. In that case a memslot near the hole is returned.  
> 
> Can you please clarify that statement? Will it be the slot that is
> closest in terms of bytes or what? Maybe also provide a use case and
> an example

I think that either of the two memslots bordering the hole (the one
before or the one after) can be returned, not necessarily the closest
one. The logic that uses this function takes this into account. Maybe I
can change the description to "the index of one of the memslots
bordering the hole is returned".
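
Something along these lines, just as a sketch of the reworded comment
(exact wording still open):

/*
 * Similar to gfn_to_memslot, but returns the index of a memslot also when
 * the address falls in a hole. In that case the index of one of the
 * memslots bordering the hole is returned; it is not necessarily the one
 * closest to the address.
 */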

> > + */
> > +static int gfn_to_memslot_approx(struct kvm *kvm, gfn_t gfn)
> > +{
> > +	struct kvm_memslots *slots = kvm_memslots(kvm);
> > +	int start = 0, end = slots->used_slots;
> > +	int slot = atomic_read(&slots->lru_slot);
> > +	struct kvm_memory_slot *memslots = slots->memslots;
> > +
> > +	if (gfn >= memslots[slot].base_gfn &&
> > +	    gfn < memslots[slot].base_gfn + memslots[slot].npages)
> > +		return slot;
> > +
> > +	while (start < end) {
> > +		slot = start + (end - start) / 2;
> > +
> > +		if (gfn >= memslots[slot].base_gfn)
> > +			end = slot;
> > +		else
> > +			start = slot + 1;
> > +	}
> > +
> > +	if (gfn >= memslots[start].base_gfn &&
> > +	    gfn < memslots[start].base_gfn + memslots[start].npages) {
> > +		atomic_set(&slots->lru_slot, start);
> > +	}  
> 
> Another question for Paolo/Radim. In case we need such a function,
> would it be better in common code (kvm_main.c)?
> > +
> > +	return start;  
> 
> 
> > +}
> > +
> > +/*
> >   * This function searches for the next page with dirty CMMA attributes, and
> >   * saves the attributes in the buffer up to either the end of the buffer or
> >   * until a block of at least KVM_S390_MAX_BIT_DISTANCE clean bits is found;
> > diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
> > index 04a3e91..b86c1ad 100644
> > --- a/arch/s390/kvm/kvm-s390.h
> > +++ b/arch/s390/kvm/kvm-s390.h
> > @@ -183,6 +183,12 @@ static inline int kvm_s390_user_cpu_state_ctrl(struct kvm *kvm)
> >  	return kvm->arch.user_cpu_state_ctrl != 0;
> >  }
> > 
> > +static inline unsigned long *cmma_bitmap(struct kvm_memory_slot *slot)
> > +{
> > +	return slot->dirty_bitmap +
> > +	       kvm_dirty_bitmap_bytes(slot) / sizeof(*slot->dirty_bitmap);
> > +}
> > +
> 
> Hmmm, in virt/kvm/kvm_main.c we do have in kvm_get_dirty_log_protect
> 
> 
>         n = kvm_dirty_bitmap_bytes(memslot);
> 
>         dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
> 
> 
> Does it make sense to have that helper common (and call it maybe
> unsigned long *kvm_get_shadow_dirty(struct kvm_memory_slot *slot))?

I hadn't noticed it; it surely makes sense to share the code.
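
Something like the below could go in common code, only as a rough sketch
(kvm_get_shadow_dirty is just the name you suggested, nothing final):

static inline unsigned long *kvm_get_shadow_dirty(struct kvm_memory_slot *slot)
{
	unsigned long len = kvm_dirty_bitmap_bytes(slot);

	/* the second ("shadow") buffer sits right after the dirty bitmap */
	return slot->dirty_bitmap + len / sizeof(*slot->dirty_bitmap);
}

Then both kvm_get_dirty_log_protect and the CMMA code could call it
instead of open-coding the offset.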



