On 15.12.23 03:26, Yin, Fengwei wrote:
> On 12/11/2023 11:56 PM, David Hildenbrand wrote:
>> Let's mimic what we did with folio_add_file_rmap_*() so we can similarly
>> replace page_add_anon_rmap() next.
>>
>> Make the compiler always special-case on the granularity by using
>> __always_inline.
>>
>> Note that the new functions ignore the RMAP_COMPOUND flag, which we will
>> remove as soon as page_add_anon_rmap() is gone.
>>
>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> Reviewed-by: Yin Fengwei <fengwei.yin@xxxxxxxxx>
>
> With a small question below.
>
> Thanks!
>
> [...]
>> +	if (flags & RMAP_EXCLUSIVE) {
>> +		switch (mode) {
>> +		case RMAP_MODE_PTE:
>> +			for (i = 0; i < nr_pages; i++)
>> +				SetPageAnonExclusive(page + i);
>> +			break;
>> +		case RMAP_MODE_PMD:
>> +			SetPageAnonExclusive(page);
>> +			break;
>> +		}
>> +	}
>> +	for (i = 0; i < nr_pages; i++) {
>> +		struct page *cur_page = page + i;
>> +
>> +		/* While PTE-mapping a THP we have a PMD and a PTE mapping. */
>> +		VM_WARN_ON_FOLIO((atomic_read(&cur_page->_mapcount) > 0 ||
>> +				  (folio_test_large(folio) &&
>> +				   folio_entire_mapcount(folio) > 1)) &&
>> +				 PageAnonExclusive(cur_page), folio);
>> +	}
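
(For context, since the new wrappers themselves are snipped above: a rough
sketch of the pattern the patch description refers to. The helper/enum names
and parameter lists below are inferred from the quoted hunk and may not match
the actual patch exactly; the point is that every exported wrapper passes a
compile-time constant granularity, so the __always_inline helper is
specialized per caller and the unused switch arm is optimized away.)

static __always_inline void __folio_add_anon_rmap(struct folio *folio,
		struct page *page, int nr_pages, struct vm_area_struct *vma,
		unsigned long address, rmap_t flags, enum rmap_mode mode)
{
	/* ... common accounting, then the RMAP_EXCLUSIVE/sanity code above ... */
}

void folio_add_anon_rmap_ptes(struct folio *folio, struct page *page,
		int nr_pages, struct vm_area_struct *vma,
		unsigned long address, rmap_t flags)
{
	/* Constant mode: the compiler emits a PTE-only specialization. */
	__folio_add_anon_rmap(folio, page, nr_pages, vma, address, flags,
			      RMAP_MODE_PTE);
}

void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
		struct vm_area_struct *vma, unsigned long address,
		rmap_t flags)
{
	/* Constant mode: the compiler emits a PMD-only specialization. */
	__folio_add_anon_rmap(folio, page, HPAGE_PMD_NR, vma, address, flags,
			      RMAP_MODE_PMD);
}
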
> This change will iterate over all pages in the PMD case. The original
> behavior didn't check all pages. Is this change on purpose? Thanks.
Yes, on purpose. I first thought about also separating the code paths
here, but realized that it makes much more sense to check each
individual subpage that is effectively getting mapped by that PMD,
instead of only the head page.
I'll add a comment to the patch description.
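
(Illustration of the point above, not code from the patch: which subpages the
PageAnonExclusive sanity check visits for a PMD-mapped THP. The "old" check is
paraphrased from memory of page_add_anon_rmap() and may not be exact; the
"new" loop is the quoted hunk with nr_pages spelled out as HPAGE_PMD_NR.)

	/* Old page_add_anon_rmap(page, ..., RMAP_COMPOUND): head page only. */
	VM_WARN_ON_FOLIO(page_mapcount(page) > 1 && PageAnonExclusive(page),
			 folio);

	/* New PMD path: every subpage covered by the PMD mapping is checked. */
	for (i = 0; i < HPAGE_PMD_NR; i++) {
		struct page *cur_page = page + i;

		VM_WARN_ON_FOLIO((atomic_read(&cur_page->_mapcount) > 0 ||
				  (folio_test_large(folio) &&
				   folio_entire_mapcount(folio) > 1)) &&
				 PageAnonExclusive(cur_page), folio);
	}
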
--
Cheers,
David / dhildenb