On 09.12.24 15:30, Mateusz Guzik wrote:
On Mon, Dec 9, 2024 at 3:22 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
On 09.12.24 13:33, Mateusz Guzik wrote:
That is to say, I think this thread has just about exhausted the time
warranted by this patch. No hard feelz if it gets dropped, but then I
do strongly suggest adding a justification for the extra load.
Maybe it's sufficient for now to simply apply your change with a comment:
diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 8c236c651d1d6..1efc992ad5687 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -234,7 +234,13 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 	rcu_read_lock();
 	/* avoid writing to the vmemmap area being remapped */
-	if (!page_is_fake_head(page) && page_ref_count(page) != u)
+	if (!page_is_fake_head(page))
+		/*
+		 * atomic_add_unless() will currently never modify the value
+		 * if it already is u. If that ever changes, we'd have to have
+		 * a separate check here, such that we won't be writing to
+		 * write-protected vmemmap areas.
+		 */
 		ret = atomic_add_unless(&page->_refcount, nr, u);
 	rcu_read_unlock();
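
For reference, the guarantee the comment relies on is the shape of an
add_unless() built around a compare-and-exchange loop: when the counter
already equals u, the loop bails out before issuing any store. A minimal,
self-contained user-space sketch of that shape (C11 atomics; an
illustration only, not the kernel's actual atomic code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* add_unless() as a cmpxchg loop: add 'a' to '*v' unless it equals 'u'. */
static bool add_unless(atomic_int *v, int a, int u)
{
	int c = atomic_load(v);

	do {
		if (c == u)
			return false;	/* bail out before any store */
	} while (!atomic_compare_exchange_weak(v, &c, c + a));

	return true;
}

int main(void)
{
	atomic_int refcount = 2;

	/* counter (2) differs from u (0): increments and returns true */
	bool r1 = add_unless(&refcount, 1, 0);
	printf("%d -> %d\n", r1, atomic_load(&refcount));	/* 1 -> 3 */

	/* counter (3) already equals u (3): returns false, no write at all */
	bool r2 = add_unless(&refcount, 1, 3);
	printf("%d -> %d\n", r2, atomic_load(&refcount));	/* 0 -> 3 */
	return 0;
}

As far as I can tell, the kernel's generic atomic_fetch_add_unless()
fallback has the same early-bail structure, which is exactly the property
the new comment pins down; if an implementation ever stopped guaranteeing
it, the vmemmap write-protection concern would come back.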
It would bail out during testing ... hopefully, such that we can detect any such change.
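
And if that behaviour ever did change, the "separate check" mentioned in
the comment would in effect be the explicit count test this patch drops,
i.e. something along these lines (hypothetical; only needed in that case):

	rcu_read_lock();
	/* avoid writing to the vmemmap area being remapped */
	if (!page_is_fake_head(page) && page_ref_count(page) != u)
		ret = atomic_add_unless(&page->_refcount, nr, u);
	rcu_read_unlock();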
Not my call to make, but looks good. ;)
fwiw I don't need any credit and I would be more than happy if you
just submitted the thing as your own without me being mentioned. *No*
cc would also be appreciated.
Likely Andrew can add the comment as a fixup.
--
Cheers,
David / dhildenb