Okay, so your speculation right now is:
1) The change in cacheline might be problematic.
2) The additional atomic operation might be problematic.
then measure the time a split takes (e.g., by mprotect()'ing one 4K page at
offset 1M of a THP?) before/after this patch.
I can certainly try getting some numbers on that. If you're aware of other
micro-benchmarks that would likely notice slower pte-mapping of THPs, please
let me know.
Thanks.
If I effectively only measure the real PMD->PTE remapping (i.e., only the
for loop that mprotect()s one 4K page inside each of 512 THPs), without any
of the mmap+populate+munmap, I can certainly measure a real difference.
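Roughly, such a measurement could look like the following (sketch only, not
my exact benchmark; it assumes MADV_HUGEPAGE and the populating memset
actually give us 2 MiB THPs, and only times the mprotect() loop):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define THP_SIZE	(2UL * 1024 * 1024)	/* assumed THP size */
#define NR_THPS		512UL
#define LEN		(NR_THPS * THP_SIZE)

int main(void)
{
	struct timespec start, end;
	unsigned long i;
	char *mem;

	mem = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (mem == MAP_FAILED)
		return 1;
	madvise(mem, LEN, MADV_HUGEPAGE);
	memset(mem, 1, LEN);		/* populate, hopefully as THPs */

	clock_gettime(CLOCK_MONOTONIC, &start);
	/* mprotect() one 4K page inside each THP to force the PMD split. */
	for (i = 0; i < NR_THPS; i++)
		mprotect(mem + i * THP_SIZE + 4096, 4096, PROT_READ);
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("split loop: %ld us\n",
	       (end.tv_sec - start.tv_sec) * 1000000 +
	       (end.tv_nsec - start.tv_nsec) / 1000);
	return 0;
}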
I briefly looked at some perf data across the overall benchmark runtime.
For page_remove_rmap(), the new atomic_dec() doesn't seem to matter much:
the data indicates it's significantly less relevant than a later
atomic_add_negative().
For page_add_anon_rmap(), it's a bit fuzzier. The
atomic_inc_return_relaxed(mapped) definitely seems to stick out, but I
cannot rule out that the atomic_add() also plays a role.
The PMD->PTE remapping (__split_huge_pmd_locked()) effectively does:
for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
	...
	/* take a per-subpage rmap reference for the new PTE */
	page_add_anon_rmap(page + i, vma, addr, RMAP_NONE);
	...
}
...
/* drop the rmap reference of the former PMD (compound) mapping */
page_remove_rmap(page, vma, true);
Inside that loop we're repeatedly accessing the total_mapcount and
_nr_pages_mapped. So my best guess would have been that both are already
hot in the cache.
RMAP batching certainly sounds like a good idea for
__split_huge_pmd_locked(), independent of this patch.
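Just to illustrate the idea (purely hypothetical interface, nothing that
exists today): a range helper could bump the per-subpage _mapcount for each
page, but touch _nr_pages_mapped and the total mapcount only once for the
whole range instead of once per PTE:

for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
	...
	/* only set up the PTE here, no per-subpage rmap work anymore */
	...
}
/* hypothetical: one shared-counter update for all HPAGE_PMD_NR subpages */
page_add_anon_rmap_range(page, HPAGE_PMD_NR, vma, haddr);
...
page_remove_rmap(page, vma, true);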
What would probably also be interesting is observing what happens when we
unmap a single PTE of a THP and cannot batch, to see whether the
page_remove_rmap() matters on a bigger scale.
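Something along these lines should exercise that path (again just a sketch,
reusing the mapping from the mprotect() example above; MADV_DONTNEED on a
single 4K page means page_remove_rmap() gets called for just that one PTE):

/* zap one 4K page inside each (assumed 2 MiB) THP, one PTE at a time */
for (i = 0; i < NR_THPS; i++)
	madvise(mem + i * THP_SIZE + 4096, 4096, MADV_DONTNEED);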
I'll do some more digging tomorrow to clarify some details. Running some
kernel compile tests with thp=always at least didn't reveal any
surprises so far.
--
Cheers,
David / dhildenb