Re: [PATCH 1/3] KVM: PPC: Implement kvmppc_copy_guest() to perform in place copy of guest memory

On Tue, 2019-03-19 at 17:53 +1100, Alexey Kardashevskiy wrote:
> 
> On 19/03/2019 15:04, Suraj Jitindar Singh wrote:
> > Implement the function kvmppc_copy_guest() to be used to perform a
> > memory
> > copy from one guest physical address to another of a variable
> > length.
> > 
> > This performs similar functionality as the kvm_read_guest() and
> > kvm_write_guest() functions, except both addresses point to guest
> > memory.
> > This performs a copy in place using raw_copy_in_user() to avoid
> > having to
> > buffer the data.
> > 
> > The guest memory can reside in different memslots and the copy
> > length
> > can span memslots.
> > 
> > Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@xxxxxxxxx>
> > ---
> >  arch/powerpc/kvm/book3s_hv.c | 69 ++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 69 insertions(+)
> > 
> > diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> > index ec38576dc685..7179ea783f4f 100644
> > --- a/arch/powerpc/kvm/book3s_hv.c
> > +++ b/arch/powerpc/kvm/book3s_hv.c
> > @@ -814,6 +814,75 @@ static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
> >  	}
> >  }
> >  
> > +static int __kvmppc_copy_guest_page(struct kvm_memory_slot *to_memslot,
> > +				    gfn_t to_gfn, int to_offset,
> > +				    struct kvm_memory_slot *from_memslot,
> > +				    gfn_t from_gfn, int from_offset, int len)
> > +{
> > +	int r;
> > +	unsigned long to_addr, from_addr;
> > +
> > +	to_addr = gfn_to_hva_memslot(to_memslot, to_gfn);
> > +	if (kvm_is_error_hva(to_addr))
> > +		return -EFAULT;
> > +	from_addr = gfn_to_hva_memslot(from_memslot, from_gfn);
> > +	if (kvm_is_error_hva(from_addr))
> > +		return -EFAULT;
> > +	r = raw_copy_in_user((void __user *)to_addr + to_offset,
> > +			     (void __user *)from_addr + from_offset, len);
> > +	if (r)
> > +		return -EFAULT;
> > +	return 0;
> > +}
> > +
> > +static int next_segment_many(unsigned long len, int offset1, int offset2)
> 
> 
> What is the "_many" suffix about?

It made sense in the context of virt/kvm/kvm_main.c, maybe less so now
that I've moved it to the PPC code.

> 
> 
> > +{
> > +	int size = min(PAGE_SIZE - offset1, PAGE_SIZE - offset2);
> 
> Nitpicking :) Here it is min()...
> 
> > +
> > +	if (len > size)
> > +		return size;
> > +	else
> > +		return len;
> 
> ...but here it is if() when it could also be min() (or may be
> min_t()).

Very true
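
For the record, a userspace sketch of the min_t() version (PAGE_SIZE and
min_t() are stubbed in here purely so the helper can be compiled and
exercised outside the kernel; in book3s_hv.c they come from the kernel
headers):

```c
#include <assert.h>

/* Stand-ins for the kernel's PAGE_SIZE and min_t() so this compiles in
 * userspace. */
#define PAGE_SIZE 4096UL
#define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

/* The helper reduced to a min_t() chain: the segment is bounded by the
 * bytes remaining and by the room left in whichever page has less. */
static int next_segment_many(unsigned long len, int offset1, int offset2)
{
	unsigned long size = min_t(unsigned long, PAGE_SIZE - offset1,
				   PAGE_SIZE - offset2);

	return min_t(unsigned long, len, size);
}
```

Same behaviour as the if/else version, just without the branch.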

> 
> 
> > +}
> > +
> > +static int kvmppc_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from,
> > +			     unsigned long len)
> 
> 
> This does not compile (most comments are made just because I had to
> reply anyway):
> 
> /home/aik/p/kernel/arch/powerpc/kvm/book3s_hv.c:835:12: error: ‘kvmppc_copy_guest’ defined but not used [-Werror=unused-function]
>  static int kvmppc_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from,
>             ^~~~~~~~~~~~~~~~~
> 
> imho 1/3 and 2/3 should be one patch.
> 
> A general comment is that the H_PAGE_INIT hcall description says "The
> logical addresses ... must both point to the start of a 4 K system
> memory page", so the loop will never have to execute more than once and
> cannot span memslots => it could be a lot simpler then, or I missed
> something here (unlikely, after reading 3/3).

Yeah I think I'll roll 1/3 and 2/3 into one and only handle 4K pages
for now.
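
If we only handle 4K pages, the hcall can just reject misaligned
arguments up front and the copy can never cross a page (or memslot)
boundary. Something like the below; this is a userspace sketch, where
the gpa_t typedef and the H_PARAMETER value are stand-ins for the
kernel/PAPR definitions:

```c
#include <assert.h>

#define SZ_4K 4096UL
#define H_PARAMETER (-4L)	/* stand-in for the PAPR return code */

typedef unsigned long gpa_t;	/* stand-in for the kernel typedef */

/* Reject H_PAGE_INIT arguments that are not 4K-aligned; after this
 * check a single-page copy is always sufficient. */
static long check_page_init_args(gpa_t dest, gpa_t src)
{
	if ((dest & (SZ_4K - 1)) || (src & (SZ_4K - 1)))
		return H_PARAMETER;
	return 0;	/* H_SUCCESS */
}
```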

> 
> 
> > +{
> > +	struct kvm_memory_slot *to_memslot = NULL;
> > +	struct kvm_memory_slot *from_memslot = NULL;
> > +	gfn_t to_gfn = to >> PAGE_SHIFT;
> > +	gfn_t from_gfn = from >> PAGE_SHIFT;
> > +	int seg;
> > +	int to_offset = offset_in_page(to);
> > +	int from_offset = offset_in_page(from);
> > +	int ret;
> > +
> > +	while ((seg = next_segment_many(len, to_offset, from_offset)) != 0) {
> > +		if (!to_memslot || (to_gfn >= (to_memslot->base_gfn +
> > +					       to_memslot->npages)))
> > +			to_memslot = gfn_to_memslot(kvm, to_gfn);
> > +		if (!from_memslot || (from_gfn >= (from_memslot->base_gfn +
> > +						   from_memslot->npages)))
> > +			from_memslot = gfn_to_memslot(kvm, from_gfn);
> > +
> > +		ret = __kvmppc_copy_guest_page(to_memslot, to_gfn, to_offset,
> > +					       from_memslot, from_gfn,
> > +					       from_offset, seg);
> > +		if (ret < 0)
> > +			return ret;
> > +		mark_page_dirty(kvm, to_gfn);
> 
> 
> Nit: if you made mark_page_dirty_in_slot() public (yeah it is in the
> common code), you could save here one search through memslots.

Yeah, given we store the most recent memslot and check it first, I
think the overhead is negligible. You are correct though.
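
For reference, the reuse condition the loop depends on, pulled out as a
hypothetical helper (struct reduced to just the two fields used so it
can be tested in userspace):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned long gfn_t;	/* stand-in for the kernel typedef */

/* Only the fields the check needs, mirroring struct kvm_memory_slot */
struct kvm_memory_slot {
	gfn_t base_gfn;
	unsigned long npages;
};

/* True while gfn still falls inside the cached slot, i.e. while the
 * loop can skip the gfn_to_memslot() search. */
static bool slot_covers_gfn(const struct kvm_memory_slot *slot, gfn_t gfn)
{
	return slot && gfn >= slot->base_gfn &&
	       gfn < slot->base_gfn + slot->npages;
}
```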

> 
> 
> > +
> > +		to_offset = (to_offset + seg) & (PAGE_SIZE - 1);
> 
> 
> s/(PAGE_SIZE - 1)/~PAGE_MASK/ ? Or even use again that
> offset_in_page()
> as you did above?
> 
> 
> > +		from_offset = (from_offset + seg) & (PAGE_SIZE - 1);
> > +		len -= seg;
> > +		if (!to_offset)
> > +			to_gfn += 1;
> > +		if (!from_offset)
> > +			from_gfn += 1;
> > +	}
> > +	return 0;
> > +}
> > +
> >  static int kvm_arch_vcpu_yield_to(struct kvm_vcpu *target)
> >  {
> >  	struct kvmppc_vcore *vcore = target->arch.vcore;
> > 