Re: [PATCH v3 2/2] KVM: s390: Fix storage attributes migration with memory slots

On Wed, 31 Jan 2018 14:15:27 +0100
Janosch Frank <frankja@xxxxxxxxxxxxxxxxxx> wrote:

> On 30.01.2018 19:52, Claudio Imbrenda wrote:
> > This is a fix for several issues that were found in the original
> > code for storage attributes migration.
> > 
> > Now no bitmap is allocated to keep track of dirty storage
> > attributes; the extra bits of the per-memslot bitmap that are
> > always present anyway are now used for this purpose.
> > 
> > The code has also been refactored a little to improve readability.
> > 
> > Fixes: 190df4a212a ("KVM: s390: CMMA tracking, ESSA emulation, migration mode")
> > Fixes: 4036e3874a1 ("KVM: s390: ioctls to get and set guest storage attributes")
> > Signed-off-by: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxxxxxxx>
> > ---  
> 
> So, this mostly makes sense to me, but is rather hard to read as a
> patch.
> 
> Acked-by: Janosch Frank <frankja@xxxxxxxxxxxxxxxxxx>
> 
> [...]
> 
> > +
> > +static unsigned long kvm_s390_next_dirty_cmma(struct kvm *kvm,
> > +					      unsigned long cur)  
> 
> s/cur/cur_gfn/ ?

will fix
 
> > +{
> > +	struct kvm_memslots *slots = kvm_memslots(kvm);  
> 
> You might be able to pass *slots instead of *kvm.

True, I'll do it, even though gfn_to_memslot_approx will then diverge
even more; but since it's an ugly hack anyway, I guess we don't care :)

> > +	struct kvm_memory_slot *ms;
> > +	unsigned long ofs;
> > +	int slotidx;
> > +
> > +	slotidx = gfn_to_memslot_approx(kvm, cur);
> > +	ms = slots->memslots + slotidx;
> > +	ofs = cur - ms->base_gfn;  
> 
> These 6 lines should be merged, if they don't get too long.

will fix

> > +
> > +	if (ms->base_gfn + ms->npages <= cur) {
> > +		slotidx--;
> > +		/* If we are above the highest slot, wrap around */
> > +		if (slotidx < 0)
> > +			slotidx = slots->used_slots - 1;
> > +
> > +		ms = slots->memslots + slotidx;
> > +		ofs = 0;
> > +	}
> > +	ofs = find_next_bit(kvm_shadow_dirty_bitmap(ms), ms->npages, ofs);
> > +	while ((slotidx > 0) && (ofs >= ms->npages)) {
> > +		slotidx--;
> > +		ms = slots->memslots + slotidx;
> > +		ofs = find_next_bit(kvm_shadow_dirty_bitmap(ms), ms->npages, 0);
> > +	}
> > +	return ms->base_gfn + ofs;
> > +}
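As an aside, for anyone following along, the wrap-around walk above can be
modeled in userspace like this. This is only a sketch: the kvm types,
gfn_to_memslot_approx() and find_next_bit() are replaced by simplified
stand-ins, and only the control flow mirrors the patch. Note that the
memslots array is ordered by descending base_gfn, so memslots[0] is the
highest slot and decrementing the index moves upward in guest addresses.

```c
#include <assert.h>

#define MAX_SLOTS 4

struct slot {
	unsigned long base_gfn;
	unsigned long npages;
	unsigned long long dirty;   /* one bit per page; stands in for the
				     * per-memslot shadow dirty bitmap */
};

struct slots {
	struct slot memslots[MAX_SLOTS]; /* sorted by descending base_gfn */
	int used_slots;
};

/* minimal find_next_bit() over a single 64-bit word */
static unsigned long find_next_bit64(unsigned long long word,
				     unsigned long size, unsigned long ofs)
{
	for (; ofs < size; ofs++)
		if (word & (1ULL << ofs))
			return ofs;
	return size;
}

/* stand-in for gfn_to_memslot_approx() */
static int slot_of(struct slots *s, unsigned long gfn)
{
	for (int i = 0; i < s->used_slots; i++)
		if (gfn >= s->memslots[i].base_gfn &&
		    gfn < s->memslots[i].base_gfn + s->memslots[i].npages)
			return i;
	return s->used_slots - 1;
}

unsigned long next_dirty(struct slots *slots, unsigned long cur)
{
	int slotidx = slot_of(slots, cur);
	struct slot *ms = slots->memslots + slotidx;
	unsigned long ofs = cur - ms->base_gfn;

	if (ms->base_gfn + ms->npages <= cur) {
		slotidx--;               /* next-higher slot */
		if (slotidx < 0)         /* above the highest slot: wrap */
			slotidx = slots->used_slots - 1;
		ms = slots->memslots + slotidx;
		ofs = 0;
	}
	ofs = find_next_bit64(ms->dirty, ms->npages, ofs);
	while ((slotidx > 0) && (ofs >= ms->npages)) {
		slotidx--;
		ms = slots->memslots + slotidx;
		ofs = find_next_bit64(ms->dirty, ms->npages, 0);
	}
	/* if nothing is dirty, this ends up at mem_end, like the patch */
	return ms->base_gfn + ofs;
}
```

If no dirty bit is found, the result is the end of the highest slot, which
is exactly the mem_end the caller checks against.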
> > +
> > +static int kvm_s390_get_cmma(struct kvm *kvm, struct kvm_s390_cmma_log *args,
> > +			     u8 *res, unsigned long bufsize)  
> 
> So I guess this could possibly be a void function, but you want to
> preserve the symmetry in kvm_s390_get_cmma_bits. :)

yes

> > +{
> > +	unsigned long mem_end, cur_gfn, next_gfn, hva, pgstev;
> > +	struct kvm_memslots *slots = kvm_memslots(kvm);
> > +	struct kvm_memory_slot *ms;
> > +
> > +	cur_gfn = kvm_s390_next_dirty_cmma(kvm, args->start_gfn);
> > +	ms = gfn_to_memslot(kvm, cur_gfn);
> > +	args->count = 0;
> > +	args->start_gfn = cur_gfn;
> > +	if (!ms)
> > +		return 0;
> > +	next_gfn = kvm_s390_next_dirty_cmma(kvm, cur_gfn + 1);
> > +	mem_end = slots->memslots[0].base_gfn + slots->memslots[0].npages;
> > +
> > +	while (args->count < bufsize) {
> > +		hva = gfn_to_hva(kvm, cur_gfn);
> > +		if (kvm_is_error_hva(hva))
> > +			break;  
> 
> return

will fix all of them

> > +		/* decrement only if we actually flipped the bit to 0 */
> > +		if (test_and_clear_bit(cur_gfn - ms->base_gfn, kvm_shadow_dirty_bitmap(ms)))
> > +			atomic64_dec(&kvm->arch.cmma_dirty_pages);
> > +		if (get_pgste(kvm->mm, hva, &pgstev) < 0)
> > +			pgstev = 0;
> > +		/* save the value */
> > +		res[args->count++] = (pgstev >> 24) & 0x43;
> > +		/*
> > +		 * if the next bit is too far away, stop.
> > +		 * if we reached the previous "next", find the next one
> > +		 */  
> 
> You could split that up onto the respective lines.

will fix

> > +		if (next_gfn > cur_gfn + KVM_S390_MAX_BIT_DISTANCE)
> > +			break;  
> 
> return
> 
> > +		if (cur_gfn == next_gfn)
> > +			next_gfn = kvm_s390_next_dirty_cmma(kvm, cur_gfn + 1);
> > +		/* reached the end of memory or of the buffer, stop */
> > +		if ((next_gfn >= mem_end) ||
> > +		    (next_gfn - args->start_gfn >= bufsize))
> > +			break;
> > +		cur_gfn++;  
> 
> /* Reached the end of the current memslot, let's take the next one. */

will add

> > +		if (cur_gfn - ms->base_gfn >= ms->npages) {
> > +			ms = gfn_to_memslot(kvm, cur_gfn);
> > +			if (!ms)
> > +				break;  
> 
> return
> 
> > +		}
> > +	}
> > +	return 0;
> > +}
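A side note on the test_and_clear_bit()/atomic64_dec() pairing above:
decrementing only when this caller actually cleared the bit keeps the
dirty-page counter exact even if two walkers race on the same bit. The
idea in a standalone sketch, with GCC __atomic builtins standing in for
the kernel's atomic bitops (not the kernel code itself):

```c
#include <assert.h>

static unsigned long long bitmap;   /* stands in for the shadow dirty bitmap */
static long long dirty_pages;       /* stands in for kvm->arch.cmma_dirty_pages */

/* Atomically clear the bit and report whether it was set before,
 * like test_and_clear_bit(). */
static int test_and_clear(unsigned long nr)
{
	unsigned long long old = __atomic_fetch_and(&bitmap, ~(1ULL << nr),
						    __ATOMIC_SEQ_CST);
	return (old >> nr) & 1;
}

void mark_clean(unsigned long nr)
{
	/* decrement only if we actually flipped the bit to 0; a second
	 * caller racing on the same bit sees it already clear and does
	 * not decrement again */
	if (test_and_clear(nr))
		__atomic_fetch_sub(&dirty_pages, 1, __ATOMIC_SEQ_CST);
}
```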
> > +
> >  /*
> >   * This function searches for the next page with dirty CMMA attributes, and
> >   * saves the attributes in the buffer up to either the end of the buffer or
> > @@ -1547,97 +1624,54 @@ static int gfn_to_memslot_approx(struct kvm *kvm, gfn_t gfn)
> >  static int kvm_s390_get_cmma_bits(struct kvm *kvm, struct kvm_s390_cmma_log *args)
> >  {
> > -	struct kvm_s390_migration_state *s = kvm->arch.migration_state;
> > -	unsigned long bufsize, hva, pgstev, i, next, cur;
> > -	int srcu_idx, peek, r = 0, rr;
> > -	u8 *res;
> > -
> > -	cur = args->start_gfn;
> > -	i = next = pgstev = 0;
> > +	unsigned long bufsize;
> > +	int srcu_idx, peek, ret;
> > +	u8 *values;
> > 
> >  	if (unlikely(!kvm->arch.use_cmma))
> >  		return -ENXIO;
> >  	/* Invalid/unsupported flags were specified */
> > -	if (args->flags & ~KVM_S390_CMMA_PEEK)
> > +	if (unlikely(args->flags & ~KVM_S390_CMMA_PEEK))  
> 
> Somebody still has to explain to me when (un)likely is useful and
> expected to be used. This feels like it was sprinkled in for good
> measure on a non-performance-critical path, i.e. strange.

yeah I overdid it a little, I'll remove them because they're useless.
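For what it's worth, the usual rule of thumb: likely()/unlikely() expand
to __builtin_expect() and only pay off when the branch sits on a hot path
and is strongly biased, so the compiler can lay out the common case as
straight-line code. A once-per-ioctl flags check like the one above gains
nothing. A self-contained illustration (the kernel macros re-defined here
so it compiles on its own):

```c
#include <assert.h>

/* The kernel defines these in <linux/compiler.h>; re-defined here only
 * so the sketch is self-contained. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* A hot, strongly biased branch: worth hinting.  The compiler keeps the
 * common case (non-negative values) on the fall-through path. */
long sum_nonnegative(const int *v, int n)
{
	long sum = 0;
	for (int i = 0; i < n; i++) {
		if (unlikely(v[i] < 0))  /* assumed rare: skip bad input */
			continue;
		sum += v[i];
	}
	return sum;
}
```

The hint never changes behavior, only code layout, which is why it is
harmless but pointless on a cold path.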



