Re: [PATCH v1 1/2] KVM: s390x: some utility functions for migration

On 01/16/2018 07:03 PM, David Hildenbrand wrote:
> On 15.01.2018 18:03, Claudio Imbrenda wrote:
>> These are some utility functions that will be used later on for storage
>> attributes migration.
>>
>> Signed-off-by: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxxxxxxx>
>> ---
>>  arch/s390/kvm/kvm-s390.c | 40 ++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 40 insertions(+)
>>
>> diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
>> index 6f17031..100ea15 100644
>> --- a/arch/s390/kvm/kvm-s390.c
>> +++ b/arch/s390/kvm/kvm-s390.c
>> @@ -764,6 +764,14 @@ static void kvm_s390_sync_request_broadcast(struct kvm *kvm, int req)
>>  		kvm_s390_sync_request(req, vcpu);
>>  }
>>  
>> +static inline unsigned long *_cmma_bitmap(struct kvm_memory_slot *ms)
> 
> I think you can get rid of the "_" here. And usually we use two _?
> 
>> +{
>> +	unsigned long long len;
>> +
>> +	len = kvm_dirty_bitmap_bytes(ms) / sizeof(*ms->dirty_bitmap);
> 
> return (void *) ms->dirty_bitmap + kvm_dirty_bitmap_bytes(ms);
> 
> ?

Relying on pointer arithmetic on (void *) having a stride of 1 is a GCC
extension. If possible I would like to avoid that.
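For comparison, a quick sketch of the two variants (not the patch code;
either way the result points at the second half of the dirty_bitmap
allocation, which KVM allocates at twice kvm_dirty_bitmap_bytes()):

	unsigned long *bitmap = ms->dirty_bitmap;
	size_t bytes = kvm_dirty_bitmap_bytes(ms);

	/* portable: offset in elements, scaled by sizeof(*bitmap) */
	unsigned long *buf = bitmap + bytes / sizeof(*bitmap);

	/* GNU C extension: arithmetic on (void *) with a stride of 1 */
	unsigned long *buf2 = (void *)bitmap + bytes;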

> 
>> +	return ms->dirty_bitmap + len;
>> +}
> 
> 
>> +
>>  /*
>>   * Must be called with kvm->srcu held to avoid races on memslots, and with
>>   * kvm->lock to avoid races with ourselves and kvm_s390_vm_stop_migration.
>> @@ -1512,6 +1520,38 @@ static long kvm_s390_set_skeys(struct kvm *kvm, struct kvm_s390_skeys *args)
>>  #define KVM_S390_CMMA_SIZE_MAX ((u32)KVM_S390_SKEYS_MAX)
>>  
>>  /*
>> + * Similar to gfn_to_memslot, but returns a memslot also when the address falls
>> + * in a hole. In that case a memslot near the hole is returned.
>> + */
>> +static int gfn_to_memslot_approx(struct kvm *kvm, gfn_t gfn)
>> +{
>> +	struct kvm_memslots *slots = kvm_memslots(kvm);
>> +	int start = 0, end = slots->used_slots;
>> +	int slot = atomic_read(&slots->lru_slot);
>> +	struct kvm_memory_slot *memslots = slots->memslots;
>> +
>> +	if (gfn >= memslots[slot].base_gfn &&
>> +	    gfn < memslots[slot].base_gfn + memslots[slot].npages)
>> +		return slot;
>> +
>> +	while (start < end) {
>> +		slot = start + (end - start) / 2;
>> +
>> +		if (gfn >= memslots[slot].base_gfn)
>> +			end = slot;
>> +		else
>> +			start = slot + 1;
>> +	}
>> +
>> +	if (gfn >= memslots[start].base_gfn &&
>> +	    gfn < memslots[start].base_gfn + memslots[start].npages) {
>> +		atomic_set(&slots->lru_slot, start);
>> +	}
>> +
>> +	return start;
>> +}
> 
> This looks ugly, hope we can avoid this ....
> 
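The intent, as a standalone sketch with simplified structures (not the
kernel code; the memslots array is kept sorted by descending base_gfn,
which is what the bisection above relies on):

	struct slot { unsigned long base_gfn, npages; };

	/*
	 * Return the index of the slot containing gfn or, if gfn falls
	 * into a hole, the index of the slot just below the hole.
	 */
	static int approx_lookup(const struct slot *s, int n, unsigned long gfn)
	{
		int start = 0, end = n;

		while (start < end) {
			int mid = start + (end - start) / 2;

			if (gfn >= s[mid].base_gfn)
				end = mid;
			else
				start = mid + 1;
		}
		return start;
	}

	/*
	 * Example: with slots { base 0x200, 0x100 pages } and
	 * { base 0x000, 0x100 pages }, gfn 0x180 lies in the hole
	 * [0x100, 0x200) and the lookup returns index 1, the slot
	 * just below the hole.
	 */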
>> +
>> +/*
>>   * This function searches for the next page with dirty CMMA attributes, and
>>   * saves the attributes in the buffer up to either the end of the buffer or
>>   * until a block of at least KVM_S390_MAX_BIT_DISTANCE clean bits is found;
>>
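For reference, the stop conditions that comment describes, as a rough
self-contained sketch (plain byte arrays instead of the real bitmaps and
storage attributes; not the patch code):

	#include <stddef.h>

	/*
	 * Copy one attribute byte per page, starting from the first
	 * dirty page at or after "start", and stop when the output
	 * buffer is full or when a run of at least "maxgap"
	 * consecutive clean pages is seen.
	 */
	static size_t scan_attrs(const unsigned char *dirty,
				 const unsigned char *attrs,
				 size_t npages, size_t start, size_t maxgap,
				 unsigned char *buf, size_t buflen)
	{
		size_t filled = 0, clean_run = 0, i = start;

		while (i < npages && !dirty[i])	/* find first dirty page */
			i++;

		for (; i < npages && filled < buflen; i++) {
			if (dirty[i])
				clean_run = 0;
			else if (++clean_run >= maxgap)
				break;
			buf[filled++] = attrs[i];
		}
		return filled;
	}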
> 
> 



