On Thu, Mar 07, 2019 at 10:18:05AM +1100, Suraj Jitindar Singh wrote:
> On Wed, 2019-03-06 at 09:21 -0800, Sean Christopherson wrote:
> > On Wed, Mar 06, 2019 at 05:00:15PM +1100, Suraj Jitindar Singh wrote:
> > > Implement the function kvm_copy_guest() to be used to perform a memory
> > > copy from one guest physical address to another of a variable length.
> > >
> > > This performs similar functionality as the kvm_read_guest() and
> > > kvm_write_guest() functions, except both addresses point to guest memory.
> > > This performs a copy in place using raw_copy_in_user() to avoid having to
> > > buffer the data.
> > >
> > > The guest memory can reside in different memslots and the copy length
> > > can span memslots.
> > >
> > > Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@xxxxxxxxx>
> > >
> > > ---
> > >
> > > I suspect additional checking may be required around the
> > > raw_copy_in_user() call.
> > >
> > > ---
> > >  include/linux/kvm_host.h |  1 +
> > >  virt/kvm/kvm_main.c      | 69 ++++++++++++++++++++++++++++++++++++++++++++++++
> > >  2 files changed, 70 insertions(+)
> > >
> >
> > ...
> > > +static int next_segment_many(unsigned long len, int offset1, int offset2)
> > > +{
> > > +	int size = min(PAGE_SIZE - offset1, PAGE_SIZE - offset2);
> > > +
> > > +	if (len > size)
> > > +		return size;
> > > +	else
> > > +		return len;
> > > +}
> > > +
> > > +int kvm_copy_guest(struct kvm *kvm, gpa_t to, gpa_t from, unsigned long len)
> > > +{
> > > +	struct kvm_memory_slot *to_memslot = NULL;
> > > +	struct kvm_memory_slot *from_memslot = NULL;
> > > +	gfn_t to_gfn = to >> PAGE_SHIFT;
> > > +	gfn_t from_gfn = from >> PAGE_SHIFT;
> > > +	int seg;
> > > +	int to_offset = offset_in_page(to);
> > > +	int from_offset = offset_in_page(from);
> > > +	int ret;
> > > +
> > > +	while ((seg = next_segment_many(len, to_offset, from_offset)) != 0) {
> > > +		if (!to_memslot || (to_gfn >= (to_memslot->base_gfn +
> > > +					       to_memslot->npages)))
> > > +			to_memslot = gfn_to_memslot(kvm, to_gfn);
> > > +		if (!from_memslot || (from_gfn >= (from_memslot->base_gfn +
> > > +						   from_memslot->npages)))
> > > +			from_memslot = gfn_to_memslot(kvm, from_gfn);
> > > +
> > > +		ret = __kvm_copy_guest_page(to_memslot, to_gfn, to_offset,
> > > +					    from_memslot, from_gfn, from_offset,
> > > +					    seg);
> > > +		if (ret < 0)
> > > +			return ret;
> > > +
> > > +		to_offset = (to_offset + seg) & (PAGE_SIZE - 1);
> > > +		from_offset = (from_offset + seg) & (PAGE_SIZE - 1);
> > > +		len -= seg;
> > > +		if (!to_offset)
> > > +			to_gfn += 1;
> > > +		if (!from_offset)
> > > +			from_gfn += 1;
> > > +	}
> > > +	return 0;
> > > +}
> > > +EXPORT_SYMBOL_GPL(kvm_copy_guest);
> >
> > Is there a need to support spanning multiple pages at this time?  Your use
> > case always accesses exactly a page and requires both dst and src to be
> > page aligned.  I.e. provide just kvm_copy_guest_page() for simplicity.
>
> There is no need for multiple pages at this stage.
> However I wanted to match kvm_[read/write]_guest which allow multiple
> pages and there didn't seem any reason to not just implement it this
> way from the start.

From a correctness standpoint, the multi-page version is quite difficult
to comprehend and review, e.g. no matter how long I stare at this code I
doubt I'll ever be 100% certain that it correctly handles every corner
case (not saying it doesn't :-) ).
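For what it's worth, the segment-splitting arithmetic can be checked in
isolation.  Below is a userspace model (not part of the patch) of the
loop from the proposed kvm_copy_guest(): next_segment_many() is lifted
verbatim modulo types, while MODEL_PAGE_SHIFT/MODEL_PAGE_SIZE and the
walk() helper are stand-ins I made up for this sketch; the real code
would call gfn_to_memslot()/__kvm_copy_guest_page() where walk() just
asserts the per-segment invariants.

```c
#include <assert.h>

/* Stand-ins for PAGE_SHIFT/PAGE_SIZE so this builds in userspace. */
#define MODEL_PAGE_SHIFT 12
#define MODEL_PAGE_SIZE  (1UL << MODEL_PAGE_SHIFT)

/* Same logic as the patch's next_segment_many(), with open-coded min(). */
static unsigned long next_segment_many(unsigned long len,
				       unsigned long offset1,
				       unsigned long offset2)
{
	unsigned long s1 = MODEL_PAGE_SIZE - offset1;
	unsigned long s2 = MODEL_PAGE_SIZE - offset2;
	unsigned long size = s1 < s2 ? s1 : s2;

	return len > size ? size : len;
}

/*
 * Walk (to, from, len) exactly as kvm_copy_guest()'s loop does and check
 * the invariants: no segment crosses a page boundary on either side, the
 * whole length is consumed, and the final gfn/offset pairs decompose
 * exactly to the original address plus len.  Returns the segment count.
 */
static int walk(unsigned long to, unsigned long from, unsigned long len)
{
	unsigned long to_gfn = to >> MODEL_PAGE_SHIFT;
	unsigned long from_gfn = from >> MODEL_PAGE_SHIFT;
	unsigned long to_offset = to & (MODEL_PAGE_SIZE - 1);
	unsigned long from_offset = from & (MODEL_PAGE_SIZE - 1);
	unsigned long to_end = to + len;
	unsigned long from_end = from + len;
	unsigned long seg;
	int segments = 0;

	while ((seg = next_segment_many(len, to_offset, from_offset)) != 0) {
		/* A segment must fit within one page on both sides. */
		assert(to_offset + seg <= MODEL_PAGE_SIZE);
		assert(from_offset + seg <= MODEL_PAGE_SIZE);

		/* Advance exactly as the patch does. */
		to_offset = (to_offset + seg) & (MODEL_PAGE_SIZE - 1);
		from_offset = (from_offset + seg) & (MODEL_PAGE_SIZE - 1);
		len -= seg;
		if (!to_offset)
			to_gfn += 1;
		if (!from_offset)
			from_gfn += 1;
		segments++;
	}

	assert(len == 0);
	assert((to_gfn << MODEL_PAGE_SHIFT) + to_offset == to_end);
	assert((from_gfn << MODEL_PAGE_SHIFT) + from_offset == from_end);
	return segments;
}
```

E.g. walk(0x10, 0xff0, 0x2000) splits into five segments because the two
offsets are misaligned relative to each other, which is the worst case:
roughly 2 * (len / PAGE_SIZE) + 1 segments.  That doesn't prove the
memslot-refresh conditions correct, but it at least pins down the
offset/gfn bookkeeping.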