Re: Why guest physical addresses are not the same as the corresponding host virtual addresses in QEMU/KVM? Thanks!

Hi Maxim,

Thanks much for your reply.

On Sun, Oct 11, 2020 at 3:29 AM Maxim Levitsky <mlevitsk@xxxxxxxxxx> wrote:
>
> On Sun, 2020-10-11 at 01:26 -0400, harry harry wrote:
> > Hi QEMU/KVM developers,
> >
> > I am sorry if my email disturbs you. I did an experiment and found that the
> > guest physical addresses (GPAs) are not the same as the corresponding
> > host virtual addresses (HVAs). I am curious why; I think they
> > should be the same. I would appreciate any comments and
> > suggestions about 1) why the GPAs and HVAs are not the same
> > in the following experiment, and 2) whether there are better experiments to
> > look into the reasons. Any other comments/suggestions are also very
> > welcome. Thanks!
> >
> > The experiment is like this: in a single-vCPU VM, I ran a program
> > that allocated and referenced many pages (e.g., 100*1024) and did not
> > let the program terminate. Then, I checked the program's guest virtual
> > addresses (GVAs) and GPAs by parsing its pagemap and maps files,
> > located at /proc/pid/pagemap and /proc/pid/maps, respectively. Finally,
> > in the host OS, I checked the vCPU's pagemap and maps files to
> > find the program's HVAs and host physical addresses (HPAs); specifically, I
> > checked the newly allocated physical pages in the host OS after the
> > program was executed in the guest OS.
> >
> > With the above experiment, I found that the program's GPAs are different
> > from the corresponding HVAs. BTW, Intel EPT and other related Intel
> > virtualization features were enabled.
> >
> > Thanks,
> > Harry
> >
> The fundamental reason is that some HVAs (e.g. QEMU's virtual memory addresses) are already allocated
> for QEMU's own use (e.g. QEMU code/heap/etc.) before the guest starts up.
>
> KVM does, though, use quite an efficient way of mapping HVAs to GPAs. It uses an array of arbitrarily sized HVA areas
> (which we call memslots), and for each such area/memslot you specify the GPA to map it to. In theory, QEMU
> could allocate the whole guest's memory in one contiguous area and map it as a single memslot to the guest.
> In practice there are MMIO holes, and various other reasons why there will be more than one memslot.

It is still not clear to me why GPAs are not the same as the
corresponding HVAs in my experiment. Since two-dimensional paging
(Intel EPT) is used, GPAs should be the same as their corresponding
HVAs. Otherwise, I think EPT may not work correctly. What do you
think?

Thanks,
Harry



