Re: [RFCv2 08/37] KVM: s390: protvirt: Add initial lifecycle handling

On 2/4/20 1:34 PM, Christian Borntraeger wrote:
> 
> 
> On 04.02.20 13:13, David Hildenbrand wrote:
>> [...]
>>
>>> +#ifdef CONFIG_KVM_S390_PROTECTED_VIRTUALIZATION_HOST
>>> +static int kvm_s390_handle_pv(struct kvm *kvm, struct kvm_pv_cmd *cmd)
>>> +{
>>> +	int r = 0;
>>> +	void __user *argp = (void __user *)cmd->data;
>>> +
>>> +	switch (cmd->cmd) {
>>> +	case KVM_PV_VM_CREATE: {
>>> +		r = -EINVAL;
>>> +		if (kvm_s390_pv_is_protected(kvm))
>>> +			break;
>>> +
>>> +		r = kvm_s390_pv_alloc_vm(kvm);
>>> +		if (r)
>>> +			break;
>>> +
>>> +		mutex_lock(&kvm->lock);
>>> +		kvm_s390_vcpu_block_all(kvm);
>>> +		/* FMT 4 SIE needs esca */
>>> +		r = sca_switch_to_extended(kvm);
>>> +		if (!r)
>>> +			r = kvm_s390_pv_create_vm(kvm);
>>> +		kvm_s390_vcpu_unblock_all(kvm);
>>> +		mutex_unlock(&kvm->lock);
>>> +		break;
>>> +	}
>>
>> I think KVM_PV_VM_ENABLE/KVM_PV_VM_DISABLE would be a better fit. You're
>> not creating/deleting VMs, are you? All you're doing is allocating
>> some data and performing some kind of a mode switch.
> 
> I kind of like the idea. Need to talk to Janosch about this. 
>> [...]
>>
>>>  	VM_EVENT(kvm, 3, "create cpu %d at 0x%pK, sie block at 0x%pK", id, vcpu,
>>>  		 vcpu->arch.sie_block);
>>>  	trace_kvm_s390_create_vcpu(id, vcpu, vcpu->arch.sie_block);
>>> @@ -4353,6 +4502,37 @@ long kvm_arch_vcpu_async_ioctl(struct file *filp,
>>>  	return -ENOIOCTLCMD;
>>>  }
>>>  
>>> +#ifdef CONFIG_KVM_S390_PROTECTED_VIRTUALIZATION_HOST
>>> +static int kvm_s390_handle_pv_vcpu(struct kvm_vcpu *vcpu,
>>> +				   struct kvm_pv_cmd *cmd)
>>> +{
>>> +	int r = 0;
>>> +
>>> +	if (!kvm_s390_pv_is_protected(vcpu->kvm))
>>> +		return -EINVAL;
>>> +
>>> +	switch (cmd->cmd) {
>>> +	case KVM_PV_VCPU_CREATE: {
>>> +		if (kvm_s390_pv_handle_cpu(vcpu))
>>> +			return -EINVAL;
>>> +
>>> +		r = kvm_s390_pv_create_cpu(vcpu);
>>> +		break;
>>> +	}
>>> +	case KVM_PV_VCPU_DESTROY: {
>>> +		if (!kvm_s390_pv_handle_cpu(vcpu))
>>> +			return -EINVAL;
>>> +
>>> +		r = kvm_s390_pv_destroy_cpu(vcpu);
>>> +		break;
>>> +	}
>>> +	default:
>>> +		r = -ENOTTY;
>>> +	}
>>> +	return r;
>>> +}
>>
>> I asked this already and didn't get an answer (lost in the flood of
>> comments :) )
>>
>> Can't we simply convert all VCPUs via KVM_PV_VM_CREATE and destroy them
>> via KVM_PV_VM_DESTROY? Then you can easily handle hotplug as well in the
>> kernel when a new VCPU is created and PV is active - oh and I see you
>> are already doing that in kvm_arch_vcpu_create(). So that screams for
>> doing this either a) completely triggered by user space or b) completely
>> in the kernel. I prefer the latter. One less interface required.
>>
>> I would assume that no VCPU is allowed to be running inside KVM while
> 
> right.
> 
>> performing the PV switch, which would make this even easier.
> 
> Same as above. I like the idea. Will need to talk to Janosch. 
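
For reference, a fully in-kernel variant would roughly look like the sketch
below: KVM_PV_VM_CREATE converts the VM and then walks all existing vCPUs.
The wrapper name is made up for illustration; kvm_s390_pv_create_vm() and
kvm_s390_pv_create_cpu() are the helpers from this patch:

	/*
	 * Rough sketch only, not from the series.  kvm->lock is assumed
	 * to be held and all vCPUs blocked, as in kvm_s390_handle_pv();
	 * unrolling of already converted vCPUs/the VM on error is omitted.
	 */
	static int kvm_s390_pv_create_all(struct kvm *kvm)
	{
		struct kvm_vcpu *vcpu;
		int i, r;

		/* protect the VM itself first */
		r = kvm_s390_pv_create_vm(kvm);
		if (r)
			return r;

		/* then convert every vCPU that already exists */
		kvm_for_each_vcpu(i, vcpu, kvm) {
			r = kvm_s390_pv_create_cpu(vcpu);
			if (r)
				return r;
		}
		return 0;
	}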

My initial code had a global VM switch, and I quickly ran into the
complexity problems that such a solution poses.

The benefits of splitting were:
	* Less time spent in the kernel for one single ioctl
	* Error handling can be split up into VM and VCPU (my biggest
	  problem initially)
	* Error reporting is more granular (i.e. the ioctl returns an error
	  for the VCPU or for the VM)
	* It's in line with KVM's concept of VCPUs and VMs, where we create
	  the VM first and then each VCPU (see the rough userspace sketch
	  below)
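
From userspace that ordering looks roughly like the sketch below.  The
command constants are the ones from this series, but the ioctl names, the
helper and the error handling are only illustrative:

	/*
	 * Illustrative only: protect the VM first, then each vCPU.  A
	 * failure on a vCPU fd is reported for exactly that vCPU, which
	 * is what makes the error reporting granular.
	 */
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int pv_enable(int vm_fd, const int *vcpu_fds, int nr_vcpus)
	{
		struct kvm_pv_cmd cmd = { .cmd = KVM_PV_VM_CREATE };
		int i, r;

		r = ioctl(vm_fd, KVM_S390_PV_COMMAND, &cmd);	/* VM-level switch */
		if (r)
			return r;

		for (i = 0; i < nr_vcpus; i++) {
			cmd.cmd = KVM_PV_VCPU_CREATE;
			r = ioctl(vcpu_fds[i], KVM_S390_PV_COMMAND_VCPU, &cmd);
			if (r)
				return r;	/* error tied to this vCPU */
		}
		return 0;
	}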


