Re: [RFCv2 21/37] KVM: S390: protvirt: Introduce instruction data area bounce buffer

On 05.02.20 12:43, David Hildenbrand wrote:
>>  #ifdef CONFIG_KVM_S390_PROTECTED_VIRTUALIZATION_HOST
>> +static long kvm_s390_guest_sida_op(struct kvm_vcpu *vcpu,
>> +				   struct kvm_s390_mem_op *mop)
>> +{
>> +	int r = 0;
>> +	void __user *uaddr = (void __user *)mop->buf;
> 
> Reverse christmas tree :)

ack.
> 
>> +
>> +	if (mop->flags || !mop->size)
>> +		return -EINVAL;
>> +
>> +	if (mop->size > sida_size(vcpu->arch.sie_block))
>> +		return -E2BIG;
> 
> Should be caught by the check below as well (or is this an implicit
> overflow check? - see below).
> 
>> +
>> +	if (mop->size + mop->offset > sida_size(vcpu->arch.sie_block))
>> +		return -E2BIG;
>> +
> 
> Do we have to care about overflows? (at least the offset is 32-bit,
> didn't check the size :))

size and offset are both unsigned 32-bit. So offset 0xfffffff0 + size 0x100
wraps around and is caught by neither check.

Hmm, should we add this as well?

@@ -4538,6 +4538,9 @@ static long kvm_s390_guest_sida_op(struct kvm_vcpu *vcpu,
        if (mop->size > sida_size(vcpu->arch.sie_block))
                return -E2BIG;
 
+       if (mop->offset > sida_size(vcpu->arch.sie_block))
+               return -E2BIG;
+
        if (mop->size + mop->offset > sida_size(vcpu->arch.sie_block))
                return -E2BIG;
 


> 
> 
>> +	switch (mop->op) {
>> +	case KVM_S390_MEMOP_SIDA_READ:
>> +		r = 0;
>> +		if (copy_to_user(uaddr, (void *)sida_origin(vcpu->arch.sie_block) +
>> +				 mop->offset, mop->size))
>> +			r = -EFAULT;
>> +
>> +		break;
>> +	case KVM_S390_MEMOP_SIDA_WRITE:
>> +		r = 0;
>> +		if (copy_from_user((void *)vcpu->arch.sie_block->sidad +
>> +				   mop->offset, uaddr, mop->size))
>> +			r = -EFAULT;
>> +		break;
>> +	}
>> +	return r;
>> +}
>> +
>>  static int kvm_s390_handle_pv_vcpu(struct kvm_vcpu *vcpu,
>>  				   struct kvm_pv_cmd *cmd)
>>  {
>> @@ -4708,6 +4743,20 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
>>  		r = kvm_s390_handle_pv_vcpu(vcpu, &args);
>>  		break;
>>  	}
>> +	case KVM_S390_SIDA_OP: {
>> +		struct kvm_s390_mem_op mem_op;
>> +
>> +		if (!kvm_s390_pv_is_protected(vcpu->kvm)) {
>> +			r = -EINVAL;
>> +			break;
>> +		}
> 
> Could we race against a VM_DESTROY? Should we protect somehow?

As far as I can tell, SIDA_OP and VCPU_DESTROY are both per-vcpu ioctls
and thus serialized by vcpu->mutex (please double-check).

> [...]
> 
>> -/* for KVM_S390_MEM_OP */
>> +/* for KVM_S390_MEM_OP and KVM_S390_SIDA_OP */
>>  struct kvm_s390_mem_op {
>>  	/* in */
>>  	__u64 gaddr;		/* the guest address */
>> @@ -475,11 +475,17 @@ struct kvm_s390_mem_op {
>>  	__u32 op;		/* type of operation */
>>  	__u64 buf;		/* buffer in userspace */
>>  	__u8 ar;		/* the access register number */
>> -	__u8 reserved[31];	/* should be set to 0 */
>> +	__u8 reserved21[3];	/* should be set to 0 */
>> +	__u32 offset;		/* offset into the sida */
> 
> maybe "side_offset"? or define a union, overlying the ar (because that
> obviously doesn't apply to this memop). So eventually different layout
> for different memop.

Will use sida_offset for now, but I still have to look into Thomas' proposal.



