Re: [PATCH v4 09/36] KVM: s390: protvirt: Add initial vm and cpu lifecycle handling

>>
>> The question will repeat a couple of times in the patch: Do we want to
>> convert that to a proper error (e.g., EBUSY, EINVAL, EWHATSOEVER)
>> instead of returning "1" to user space (which looks weird).
> 
> Not sure about the right error code. 
> -EIO for cc == 1?

Makes sense.
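
Roughly, wherever we currently forward the cc (just a sketch, assuming
the uv_call() helper from this series and that the return value is
passed on to user space unmodified):

	cc = uv_call(0, (u64)&uvcb);
	/* translate the condition code into a proper errno instead of
	 * leaking the raw cc to user space */
	return cc ? -EIO : 0;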

[...]

>>> +	if (!cc)
>>> +		free_pages(vcpu->arch.pv.stor_base,
>>> +			   get_order(uv_info.guest_cpu_stor_len));
>>
>> Should we clear arch.pv.handle?
> 
> this is done in the memset below

missed that, grepping betrayed me.

>>
>> Also, I do wonder if it makes sense to
>>
>> vcpu->arch.pv.stor_base = NULL;
> 
> same. We could do 4 single assignments instead, but the memset is probably ok?

yes, it's only harder to review ;)
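
For the record, the memset boils down to (sketch, assuming the struct
layout from this patch):

	/* zero the whole pv state (handle, stor_base, ...) in one go
	 * instead of four single assignments */
	memset(&vcpu->arch.pv, 0, sizeof(vcpu->arch.pv));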

[...]

>>> +	mutex_lock(&kvm->slots_lock);
>>> +	memslot = kvm_memslots(kvm)->memslots;
>>> +	npages = memslot->base_gfn + memslot->npages;
>>
>> I remember I asked this question already, maybe I missed the reply :(
>>
>> 1. What if we have multiple slots?
> 
> memslot 0 is the last one, so it actually covers the highest memory address;
> this should be ok.

I think I got it wrong (thought there would be some start and length -
but it's only a length, which makes sense).
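
Background, for anybody else reading along: the memslot array is sorted
by base_gfn in descending order, so slot 0 is the one ending at the
highest gfn. That is the invariant the computation relies on (sketch):

	/* slot 0 ends at the highest guest frame, so base_gfn + npages
	 * is the total number of guest pages if memory is contiguous
	 * from gfn 0 */
	memslot = kvm_memslots(kvm)->memslots;
	npages = memslot->base_gfn + memslot->npages;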

> 
>> 2. What is expected to happen if new slots are added (e.g., memory
>> hotplug in the future?)
>>
>> Shouldn't you bail out if there is more than one slot and make sure that
>> no new ones can be added as long as pv is active (I remember the latter
>> should be very easy from an arch callback)?
> 
> Yes, that should be easy, something like the following I guess
> 
> --- a/arch/s390/kvm/kvm-s390.c
> +++ b/arch/s390/kvm/kvm-s390.c
> @@ -4744,6 +4744,9 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>         if (mem->guest_phys_addr + mem->memory_size > kvm->arch.mem_limit)
>                 return -EINVAL;
>  
> +       /* When we are protected we should not change the memory slots */
> +       if (kvm_s390_pv_is_protected(kvm))
> +               return -EINVAL;
>         return 0;
>  }
>  
> 
> I think we can extend that later to actually use
> the memory size from kvm->arch.mem_limit, as long as this is reasonably small.
> This should then be done when we implement memory hotplug.

IMHO mem_limit would make a lot of sense and would even make hotplug
work out of the box. I assume we can rely on user space setting this up
properly for all PV guests (KVM_S390_VM_MEM_LIMIT_SIZE).

So maybe use that directly and bail out if it's too big? (esp. if it's
KVM_S390_NO_MEM_LIMIT).
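
I.e. something like this in the PV_ENABLE path (sketch only;
KVM_S390_NO_MEM_LIMIT means user space never configured a limit):

	/* refuse to enter protected mode without a sane memory limit
	 * configured via KVM_S390_VM_MEM_LIMIT_SIZE */
	if (kvm->arch.mem_limit == KVM_S390_NO_MEM_LIMIT)
		return -EINVAL;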

[...]

>> Similar to the VCPU path, shouldn't we set all pointers to NULL but skip
>> the freeing? With a similar comment /* Intended memory leak ... */
> 
> This is done in kvm_s390_pv_dealloc_vm. And I think it makes sense to keep
> the VM handle linked to the KVM struct. This will prevent the user from doing
> another PV_ENABLE on this guest.


Makes sense.
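
So effectively, the still-linked handle acts as the guard - sketch:

	/* the handle stays linked after deactivation; a second
	 * PV_ENABLE on this guest finds it and bails out */
	if (kvm_s390_pv_is_protected(kvm))
		return -EINVAL;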


-- 
Thanks,

David / dhildenb




