On Tue, 22 Nov 2022, Hugh Dickins wrote:
> It's hard to add a page_add_anon_rmap() into __split_huge_pmd_locked()'s
> HPAGE_PMD_NR set_pte_at() loop, without wincing at the "freeze" case's
> HPAGE_PMD_NR page_remove_rmap() loop below it.

No problem here, but I did later learn something worth sharing.

Comparing before and after vmstats for the series, I was worried to find
the thp_deferred_split_page count consistently much lower afterwards
(10%? 1%?), and thought maybe the COMPOUND_MAPPED patch had messed up
the accounting for when to call deferred_split_huge_page(). But no:
that's as before. We can debate sometime whether it could do a better
job - the vast majority of calls to deferred_split_huge_page() are just
repeats - but that's a different story, one I'm not keen to get into at
the moment.

> -	if (freeze) {
> -		for (i = 0; i < HPAGE_PMD_NR; i++) {
> -			page_remove_rmap(page + i, vma, false);
> -			put_page(page + i);
> -		}
> -	}

The reason for the lower thp_deferred_split_page (at least in the kind
of testing I was doing) was a very good thing: those page_remove_rmap()
calls from __split_huge_pmd_locked() had very often been adding the page
to the deferred split queue, precisely while it was already being split.

The list management is such that there was no corruption, and splitting
calls from the split queue itself did not reach the point of bumping up
the thp_deferred_split_page count; but off-queue splits would add the
page before deleting it again, adding lots of noise to the count, and
unnecessary contention on the queue lock, I presume.

Hugh