The patch titled
     Subject: mm-remove-an-avoidable-load-of-page-refcount-in-page_ref_add_unless-fix
has been added to the -mm mm-unstable branch.  Its filename is
     mm-remove-an-avoidable-load-of-page-refcount-in-page_ref_add_unless-fix.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-remove-an-avoidable-load-of-page-refcount-in-page_ref_add_unless-fix.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Subject: mm-remove-an-avoidable-load-of-page-refcount-in-page_ref_add_unless-fix
Date: Mon Dec  9 07:53:02 PM PST 2024

add comment from David

Link: https://lkml.kernel.org/r/f5a65bf5-5105-4376-9c1c-164a15a4ab79@xxxxxxxxxx
Cc: Mateusz Guzik <mjguzik@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/page_ref.h |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

--- a/include/linux/page_ref.h~mm-remove-an-avoidable-load-of-page-refcount-in-page_ref_add_unless-fix
+++ a/include/linux/page_ref.h
@@ -234,8 +234,15 @@ static inline bool page_ref_add_unless(s
 
 	rcu_read_lock();
 	/* avoid writing to the vmemmap area being remapped */
-	if (!page_is_fake_head(page))
+	if (!page_is_fake_head(page)) {
+		/*
+		 * atomic_add_unless() will currently never modify the value
+		 * if it already is u. If that ever changes, we'd have to have
+		 * a separate check here, such that we won't be writing to
+		 * write-protected vmemmap areas.
+		 */
 		ret = atomic_add_unless(&page->_refcount, nr, u);
+	}
 	rcu_read_unlock();
 
 	if (page_ref_tracepoint_active(page_ref_mod_unless))
_

Patches currently in -mm which might be from akpm@xxxxxxxxxxxxxxxxxxxx are

mm-vmscan-account-for-free-pages-to-prevent-infinite-loop-in-throttle_direct_reclaim-checkpatch-fixes.patch
mm-swap_cgroup-allocate-swap_cgroup-map-using-vcalloc-fix.patch
mm-page_alloc-add-some-detailed-comments-in-can_steal_fallback-fix.patch
mm-introduce-mmap_lock_speculate_try_beginretry-fix.patch
mm-damon-tests-vaddr-kunith-reduce-stack-consumption.patch
mm-remove-an-avoidable-load-of-page-refcount-in-page_ref_add_unless-fix.patch
xarray-port-tests-to-kunit-fix.patch
fault-inject-use-prandom-where-cryptographically-secure-randomness-is-not-needed-fix.patch
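
The comment added by this patch relies on a property of the kernel's
generic atomic_add_unless() fallback: the cmpxchg loop bails out before
attempting any store once it observes the limit value u, so a
write-protected location is never touched in that case.  Below is a
minimal, self-contained userspace sketch of that logic using C11
atomics rather than the kernel's atomic_t API; the name
add_unless_sketch and the use of <stdatomic.h> are illustrative
assumptions, not kernel code.

/*
 * Sketch of the logic behind the generic atomic_add_unless() fallback.
 *
 * The property the page_ref_add_unless() comment depends on: when the
 * observed value already equals u, we return before issuing any store.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool add_unless_sketch(atomic_int *v, int a, int u)
{
	int c = atomic_load(v);

	do {
		if (c == u)
			return false;	/* no store is ever attempted */
	} while (!atomic_compare_exchange_weak(v, &c, c + a));
	/* on failure, c is refreshed with the current value and rechecked */

	return true;
}

int main(void)
{
	atomic_int refcount = 0;

	/* Value already equals the "unless" limit: nothing is written. */
	printf("added: %d, value: %d\n",
	       add_unless_sketch(&refcount, 1, 0), atomic_load(&refcount));

	atomic_store(&refcount, 2);
	/* Value is 2 != 0: the add proceeds. */
	printf("added: %d, value: %d\n",
	       add_unless_sketch(&refcount, 1, 0), atomic_load(&refcount));
	return 0;
}

The kernel's generic fallback (atomic_fetch_add_unless() in the
generated atomic headers) has the same shape, with atomic_try_cmpxchg()
refreshing the expected value on failure; the "currently" in the new
comment hedges against arch-specific implementations ever diverging
from this no-store-on-match behaviour.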