On 03.11.22 07:01, alexlzhu@xxxxxx wrote:
From: Alexander Zhu <alexlzhu@xxxxxx>
Changelog:
v5 to v6
-removed PageSwapCache check from add_underutilized_thp as split_huge_page takes care of this already.
-added check for PageHuge in add_underutilized_thp to account for hugetlbfs pages.
-added Yu Zhao as author for the second patch
v4 to v5
-split out split_huge_page changes into three different patches. One for zapping zero pages, one for not remapping zero pages, and one for self tests.
-fixed bug with lru_to_folio that was corrupting the folio
-fixed bug with memchr_inv in mm/thp_utilization. zero page should mean !memchr_inv(kaddr, 0, PAGE_SIZE)
v3 to v4
-changed thp_utilization_bucket() function to take folios, saves conversion between page and folio
-added newlines where they were previously missing in v2-v3
-moved the thp utilization code out into its own file under mm/thp_utilization.c
-removed is_anonymous_transparent_hugepage function. Use folio_test_anon and folio_test_trans_huge instead.
-changed thp_number_utilized_pages to use memchr_inv
-added some comments regarding trylock
-changed the relock to be unconditional in low_util_free_page
-only expose can_shrink_thp, abstract the thp_utilization and bucket logic to be private to mm/thp_utilization.c
v2 to v3
-moved put_page() to after trylock_page in low_util_free_page, so put() is called after the corresponding get()
-removed spin_unlock_irq in low_util_free_page above LRU_SKIP. There was a double unlock.
-moved spin_unlock_irq() to below list_lru_isolate() in low_util_free_page. This is to shorten the critical section.
-moved lock_page in add_underutilized_thp such that we only lock when allocating and adding to the list_lru
-removed list_lru_alloc in list_lru_add_page and list_lru_delete_page as these are no longer needed.
v1 to v2
-reversed ordering of is_transparent_hugepage and PageAnon in is_anon_transparent_hugepage, page->mapping is only meaningful for user pages
-only trigger the unmap_clean/zap in split_huge_page on anonymous THPs. We cannot zap zero pages for file THPs.
-modified split_huge_page self test based on more recent changes.
-Changed lru_lock to be irq safe. Added irq_save and restore around list_lru adds/deletes.
-Changed low_util_free_page() to trylock the page, and if it fails, unlock lru_lock and return LRU_SKIP. This is to avoid deadlock between reclaim, which calls split_huge_page() and the THP Shrinker
-Changed low_util_free_page() to unlock lru_lock, split_huge_page, then lock lru_lock. This way split_huge_page is not called with the lru_lock held. That leads to deadlock as split_huge_page calls on_each_cpu_mask
-Changed list_lru_shrink_walk to list_lru_shrink_walk_irq.
RFC to v1
-refactored out the code to obtain the thp_utilization_bucket, as that now has to be used in multiple places.
-added support to map to the read only zero page when splitting a THP registered with userfaultfd.
Hm. I just stumbled over QEMU background snapshot code again:
What QEMU does for background snapshots is the following:
1) Read-access all guest memory so we have something mapped.
(ram_write_tracking_prepare()->ram_block_populate_read())
2) Register uffd-wp on all guest memory and uffd-wp protect it
(ram_write_tracking_start()).
So if you split a THP and discard zeropages after 1), but before 2), the
background snapshot might be messed up: instead of zeroes inside the
background snapshot we might find modifications that happened after
starting the snapshot, because uffd-wp protection of some pages was
impossible.
What QEMU does before all that is sense whether uffd-wp is possible for all
guest memory by temporarily registering uffd-wp and then unregistering it
again (ram_write_tracking_compatible()).
Maybe we could use that (whether any uffd-wp registration has happened for
the process) as an indication of whether we have to be careful and not
discard zero pages ...
--
Thanks,
David / dhildenb