On Sun, 2020-10-11 at 01:26 -0400, harry harry wrote:
> Hi QEMU/KVM developers,
>
> I am sorry if my email disturbs you. I did an experiment and found that
> guest physical addresses (GPAs) are not the same as the corresponding
> host virtual addresses (HVAs). I am curious why; I thought they would be
> the same. I would appreciate any comments and suggestions on 1) why GPAs
> and HVAs differ in the following experiment, and 2) whether there are
> better experiments to investigate the reasons. Any other
> comments/suggestions are also very welcome. Thanks!
>
> The experiment was as follows: in a single-vCPU VM, I ran a program that
> allocated and referenced many pages (e.g., 100*1024) and did not let the
> program terminate. Then, I obtained the program's guest virtual
> addresses (GVAs) and GPAs by parsing its pagemap and maps files, located
> at /proc/pid/pagemap and /proc/pid/maps, respectively. Finally, in the
> host OS, I checked the vCPU's pagemap and maps files to find the
> program's HVAs and host physical addresses (HPAs); specifically, I
> checked the physical pages newly allocated in the host OS after the
> program was executed in the guest OS.
>
> With the above experiment, I found that the program's GPAs differ from
> the corresponding HVAs. BTW, Intel EPT and other related Intel
> virtualization techniques were enabled.
>
> Thanks,
> Harry

The fundamental reason is that some HVAs (e.g., QEMU's own virtual memory
addresses) are already allocated for QEMU's own use (e.g., QEMU's
code/heap/etc.) before the guest starts up, so guest RAM cannot simply
begin at HVA 0.

KVM does, however, use quite an efficient way of mapping HVAs to GPAs. It
uses an array of arbitrarily sized HVA areas (which we call memslots), and
for each such area/memslot you specify the GPA it maps to. In theory, QEMU
could allocate the whole guest's memory in one contiguous area and map it
to the guest as a single memslot.
In practice there are MMIO holes, and various other reasons why there will
be more than one memslot.

Best regards,
Maxim Levitsky