On 24/10/2024 10:54, Nikita Kalyazin wrote:
[2] proposes an alternative to
UserfaultFD for intercepting stage-2 faults, while this series
conceptually complements it with the ability to populate guest memory
backed by guest_memfd for `KVM_X86_SW_PROTECTED_VM` VMs.
+David
+Sean
+mm
While measuring memory population performance of guest_memfd using this
series, I noticed that guest_memfd population takes longer than my
baseline, which is filling anonymous private memory via UFFDIO_COPY.
I am using x86_64 for my measurements and a 3 GiB memory region (a
sketch of the baseline population loop follows the numbers):
- anon/private UFFDIO_COPY: 940 ms
- guest_memfd: 1371 ms (+46%)
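For reference, the UFFDIO_COPY baseline boils down to a loop of the
following shape (a minimal sketch, assuming the destination range is
already registered via UFFDIO_REGISTER; the helper name and parameters
are mine, and uffd setup/error reporting are omitted):

#include <stddef.h>
#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/* Fill [dst, dst + len) of a userfaultfd-registered mapping by copying
 * the same pre-filled source page, one page per ioctl. */
static int populate_uffdio_copy(int uffd, char *dst, size_t len,
                                const char *src_page, size_t page_size)
{
        size_t off;

        for (off = 0; off < len; off += page_size) {
                struct uffdio_copy copy = {
                        .dst = (unsigned long)(dst + off),
                        .src = (unsigned long)src_page,
                        .len = page_size,
                        .mode = 0,
                };

                if (ioctl(uffd, UFFDIO_COPY, &copy) == -1)
                        return -1;      /* errno set by the kernel */
        }
        return 0;
}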
It turns out that the effect is observable not only for guest_memfd, but
also for any type of shared memory, e.g. memfd or anonymous memory mapped
as shared.
Below are measurements of a plain mmap(MAP_POPULATE) operation (a
timing sketch follows the results):
mmap(NULL, 3ll * (1 << 30), PROT_READ | PROT_WRITE,
     MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
vs
mmap(NULL, 3ll * (1 << 30), PROT_READ | PROT_WRITE,
     MAP_SHARED | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
Results:
- MAP_PRIVATE: 968 ms
- MAP_SHARED: 1646 ms
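(The numbers above can be reproduced by simply wall-clocking the mmap()
call; a minimal sketch, with the helper name being mine:)

#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

/* Wall-clock a populating 3 GiB mmap(); flags is MAP_PRIVATE or
 * MAP_SHARED. */
static void time_populate(const char *name, int flags)
{
        const size_t len = 3ull * (1 << 30);
        struct timespec a, b;
        void *p;

        clock_gettime(CLOCK_MONOTONIC, &a);
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 flags | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
        clock_gettime(CLOCK_MONOTONIC, &b);

        if (p == MAP_FAILED) {
                perror(name);
                return;
        }
        printf("%s: %ld ms\n", name,
               (b.tv_sec - a.tv_sec) * 1000 +
               (b.tv_nsec - a.tv_nsec) / 1000000);
        munmap(p, len);
}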
I am seeing this effect on a range of kernels: the oldest I tested was
5.10 and the newest is the current kvm-next (for-linus-2590-gd96c77bd4eeb).
When profiling with perf, I observe the following hottest operations
(on kvm-next); the full call trees are attached at the end of the email.
MAP_PRIVATE:
- 19.72% clear_page_erms, rep stos %al,%es:(%rdi)
MAP_SHARED:
- 43.94% shmem_get_folio_gfp, lock orb $0x8,(%rdi), which is the atomic
setting of the PG_uptodate bit
- 10.98% clear_page_erms, rep stos %al,%es:(%rdi)
Note that MAP_PRIVATE/do_anonymous_page calls __folio_mark_uptodate,
which sets the PG_uptodate bit with a plain (non-atomic) store, while
MAP_SHARED/shmem_get_folio_gfp calls folio_mark_uptodate, which sets the
PG_uptodate bit atomically.
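For reference, the two helpers differ only in the bit-set primitive
used; roughly (paraphrasing include/linux/page-flags.h from memory, not
verbatim):

/* Non-atomic variant used by the anonymous-private path: */
static inline void __folio_mark_uptodate(struct folio *folio)
{
        smp_wmb();      /* order the page fill before the flag */
        __set_bit(PG_uptodate, folio_flags(folio, 0));  /* plain orb */
}

/* Atomic variant used by the shmem path: */
static inline void folio_mark_uptodate(struct folio *folio)
{
        smp_wmb();
        set_bit(PG_uptodate, folio_flags(folio, 0));    /* lock orb */
}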
While this logic is intuitive, its performance effect is more
significant than I would expect.
The questions are:
- Is this a well-known behaviour?
- Is there a way to mitigate that, i.e. make shared memory (including
guest_memfd) population faster/comparable to private memory?
Nikita
Appendix: full call trees obtained via perf
MAP_PRIVATE:
- 87.97% __mmap
entry_SYSCALL_64_after_hwframe
do_syscall_64
vm_mmap_pgoff
__mm_populate
populate_vma_page_range
- __get_user_pages
- 77.94% handle_mm_fault
- 76.90% __handle_mm_fault
- 72.70% do_anonymous_page
- 31.92% vma_alloc_folio_noprof
- 30.74% alloc_pages_mpol_noprof
- 29.60% __alloc_pages_noprof
- 28.40% get_page_from_freelist
19.72% clear_page_erms
- 3.00% __rmqueue_pcplist
__mod_zone_page_state
1.18% _raw_spin_trylock
- 20.03% __pte_offset_map_lock
- 15.96% _raw_spin_lock
1.50% preempt_count_add
- 2.27% __pte_offset_map
__rcu_read_lock
- 7.22% __folio_batch_add_and_move
- 4.68% folio_batch_move_lru
- 3.77% lru_add
+ 0.95% __mod_zone_page_state
0.86% __mod_node_page_state
0.84% folios_put_refs
0.55% check_preemption_disabled
- 2.85% folio_add_new_anon_rmap
- __folio_mod_stat
__mod_node_page_state
- 1.15% pte_offset_map_nolock
__pte_offset_map
- 7.59% follow_page_pte
- 4.56% __pte_offset_map_lock
- 2.27% _raw_spin_lock
preempt_count_add
1.13% __pte_offset_map
0.75% folio_mark_accessed
MAP_SHARED:
- 77.89% __mmap
entry_SYSCALL_64_after_hwframe
do_syscall_64
vm_mmap_pgoff
__mm_populate
populate_vma_page_range
- __get_user_pages
- 72.11% handle_mm_fault
- 71.67% __handle_mm_fault
- 69.62% do_fault
- 44.61% __do_fault
- shmem_fault
- 43.94% shmem_get_folio_gfp
- 17.20% shmem_alloc_and_add_folio.constprop.0
- 5.10% shmem_alloc_folio
- 4.58% folio_alloc_mpol_noprof
- alloc_pages_mpol_noprof
- 4.00% __alloc_pages_noprof
- 3.31% get_page_from_freelist
1.24% __rmqueue_pcplist
- 5.07% shmem_add_to_page_cache
- 1.44% __mod_node_page_state
0.61% check_preemption_disabled
0.78% xas_store
0.74% xas_find_conflict
0.66% _raw_spin_lock_irq
- 3.96% __folio_batch_add_and_move
- 2.41% folio_batch_move_lru
1.88% lru_add
- 1.56% shmem_inode_acct_blocks
- 1.24% __dquot_alloc_space
- 0.77% inode_add_bytes
_raw_spin_lock
- 0.77% shmem_recalc_inode
_raw_spin_lock
10.98% clear_page_erms
- 1.17% filemap_get_entry
0.78% xas_load
- 20.26% filemap_map_pages
- 12.23% next_uptodate_folio
- 1.27% xas_find
xas_load
- 1.16% __pte_offset_map_lock
0.59% _raw_spin_lock
- 3.48% finish_fault
- 1.28% set_pte_range
0.96% folio_add_file_rmap_ptes
- 0.91% __pte_offset_map_lock
0.54% _raw_spin_lock
0.57% pte_offset_map_nolock
- 4.11% follow_page_pte
- 2.36% __pte_offset_map_lock
- 1.32% _raw_spin_lock
preempt_count_add
0.54% __pte_offset_map