Re: [RFC PATCH 0/4] KVM: ioctl for populating guest_memfd

On 20.11.24 13:09, Nikita Kalyazin wrote:
On 24/10/2024 10:54, Nikita Kalyazin wrote:
[2] proposes an alternative to
UserfaultFD for intercepting stage-2 faults, while this series
conceptually complements it with the ability to populate guest memory
backed by guest_memfd for `KVM_X86_SW_PROTECTED_VM` VMs.

+David
+Sean
+mm

Hi!


While measuring memory population performance of guest_memfd using this
series, I noticed that guest_memfd population takes longer than my
baseline, which is filling anonymous private memory via UFFDIO_COPY.

I am using x86_64 for my measurements and 3 GiB memory region:
   - anon/private UFFDIO_COPY:  940 ms
   - guest_memfd:              1371 ms (+46%)
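
For reference, the UFFDIO_COPY baseline boils down to a per-page copy loop
along these lines (a sketch only, not the actual test harness; the function
name is a placeholder, and it assumes the region has already been registered
with UFFDIO_REGISTER_MODE_MISSING and that src holds one page of payload):

#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/* Sketch: populate a uffd-registered region one page at a time. */
static int populate_with_uffdio_copy(int uffd, char *region, char *src,
                                     size_t len, size_t page_size)
{
        for (size_t off = 0; off < len; off += page_size) {
                struct uffdio_copy copy = {
                        .dst = (unsigned long)(region + off),
                        .src = (unsigned long)src,
                        .len = page_size,
                        .mode = 0,
                };

                if (ioctl(uffd, UFFDIO_COPY, &copy) == -1)
                        return -1;
        }
        return 0;
}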

It turns out that the effect is observable not only for guest_memfd, but
also for any type of shared memory, e.g. memfd or anonymous memory mapped
as shared.
Below are measurements of a plain mmap(MAP_POPULATE) operation:
mmap(NULL, 3ll * (1 << 30), PROT_READ | PROT_WRITE, MAP_PRIVATE |
MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
   vs
mmap(NULL, 3ll * (1 << 30), PROT_READ | PROT_WRITE, MAP_SHARED |
MAP_ANONYMOUS | MAP_POPULATE, -1, 0);

Results:
   - MAP_PRIVATE: 968 ms
   - MAP_SHARED: 1646 ms
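
(Timing sketch -- not the exact harness, but numbers in this ballpark can be
reproduced by taking wall-clock time around the mmap() call, error handling
trimmed:)

#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
        struct timespec t0, t1;
        void *p;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        p = mmap(NULL, 3ll * (1 << 30), PROT_READ | PROT_WRITE,
                 MAP_SHARED | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        if (p == MAP_FAILED)
                return 1;

        printf("%lld ms\n",
               (long long)(t1.tv_sec - t0.tv_sec) * 1000 +
               (t1.tv_nsec - t0.tv_nsec) / 1000000);
        return 0;
}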

At least here it is expected to some degree: as soon as the page cache is involved, map/unmap gets slower, because we are effectively maintaining two data structures (page tables + page cache) instead of only a single one (page tables).

Can you make sure that THP/large folios don't interfere in your experiments (e.g., madvise(MADV_NOHUGEPAGE))?
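
(One way to rule THP out: drop MAP_POPULATE, madvise(MADV_NOHUGEPAGE) the
range first, and only then populate explicitly, e.g. with
MADV_POPULATE_WRITE. A sketch; MADV_POPULATE_WRITE needs a >= 5.14 kernel,
and the helper name is made up:)

#include <stddef.h>
#include <sys/mman.h>

/* Sketch: map without MAP_POPULATE, disable THP, then populate. */
static void *map_populate_nothp(size_t len, int shared_or_private)
{
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       shared_or_private | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                return NULL;
        if (madvise(p, len, MADV_NOHUGEPAGE) ||
            madvise(p, len, MADV_POPULATE_WRITE)) {
                munmap(p, len);
                return NULL;
        }
        return p;
}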


I am seeing this effect on a range of kernels. The oldest I used was
5.10, the newest is the current kvm-next (for-linus-2590-gd96c77bd4eeb).

When profiling with perf, I observe the following hottest operations
(kvm-next). Attaching full distributions at the end of the email.

MAP_PRIVATE:
- 19.72% clear_page_erms, rep stos %al,%es:(%rdi)

MAP_SHARED:
- 43.94% shmem_get_folio_gfp, lock orb $0x8,(%rdi), which is the atomic
setting of the PG_uptodate bit
- 10.98% clear_page_erms, rep stos %al,%es:(%rdi)

Interesting.

Note that MAP_PRIVATE/do_anonymous_page calls __folio_mark_uptodate, which
sets the PG_uptodate bit non-atomically, while MAP_SHARED/shmem_get_folio_gfp
calls folio_mark_uptodate, which sets the PG_uptodate bit atomically.
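
(Roughly, paraphrasing the two helpers rather than quoting
include/linux/page-flags.h -- the difference is __set_bit() vs. the atomic
set_bit(), and the latter is the "lock orb" in your profile:)

/* Paraphrase, not verbatim kernel code. */
static inline void __folio_mark_uptodate(struct folio *folio)
{
        smp_wmb();
        __set_bit(PG_uptodate, folio_flags(folio, 0));  /* plain or */
}

static inline void folio_mark_uptodate(struct folio *folio)
{
        smp_wmb();
        set_bit(PG_uptodate, folio_flags(folio, 0));    /* lock orb */
}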

While this logic is intuitive, its performance effect is more
significant than I would expect.

Yes. How much of the performance difference would remain if you hack out the atomic op just to play with it? I suspect there will still be some difference.
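
(Untested, and just for the experiment: since the atomic set shows up under
shmem_get_folio_gfp in your profile, the hack could be as small as switching
that call site in mm/shmem.c to the non-atomic helper, assuming the call is
still inside shmem_get_folio_gfp():)

-	folio_mark_uptodate(folio);
+	__folio_mark_uptodate(folio);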


The questions are:
   - Is this a well-known behaviour?
   - Is there a way to mitigate that, i.e. make shared memory (including
guest_memfd) population faster/comparable to private memory?

Likely. But your experiment above measures something different from what guest_memfd vs. anon does: guest_memfd doesn't update page tables, so I would assume guest_memfd will be faster than MAP_POPULATE.

How do you end up allocating memory for guest_memfd? Using simple fallocate()?
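
(I.e., something along these lines -- just a sketch of what I mean by
"simple fallocate()" against the guest_memfd UAPI, not your actual code:)

#define _GNU_SOURCE
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: create a guest_memfd for the VM and preallocate it. */
static int alloc_guest_memfd(int vm_fd, uint64_t size)
{
        struct kvm_create_guest_memfd gmem = {
                .size = size,
                .flags = 0,
        };
        int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

        if (gmem_fd < 0)
                return -1;
        /* Allocate all folios up front, before the guest touches them. */
        if (fallocate(gmem_fd, 0, 0, size)) {
                close(gmem_fd);
                return -1;
        }
        return gmem_fd;
}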

Note that we might improve allocation times with guest_memfd when allocating larger folios.

--
Cheers,

David / dhildenb




