On 21.05.24 14:57, Brendan Jackman wrote:
> These fields are written by memory hotplug under mem_hotplug_lock but
> read without any lock. It seems like reader code is robust against the
> value being stale or "from the future", but we also need to account
> for:
>
> 1. Load/store tearing (which, according to Linus[1], really happens,
>    even when everything is aligned as you would hope).
>
> 2. Invented loads[2] - the compiler can spill and re-read these fields
>    and assume that they have not changed.
>
> Note that we don't need READ_ONCE in paths that hold mem_hotplug_lock
> for write, but we still need WRITE_ONCE there to prevent store-tearing.
>
> [1] https://lore.kernel.org/all/CAHk-=wj2t+GK+DGQ7Xy6U7zMf72e7Jkxn4_-kGyfH3WFEoH+YQ@xxxxxxxxxxxxxx/T/#u
>     As discovered via the original big-bad article[2]
>
> [2] https://lwn.net/Articles/793253/
>
> Signed-off-by: Brendan Jackman <jackmanb@xxxxxxxxxx>
> ---
>  include/linux/mmzone.h | 14 ++++++++++----
>  mm/compaction.c        |  2 +-
>  mm/memory_hotplug.c    | 20 ++++++++++++--------
>  mm/mm_init.c           |  2 +-
>  mm/page_alloc.c        |  2 +-
>  mm/show_mem.c          |  8 ++++----
>  mm/vmstat.c            |  4 ++--
>  7 files changed, 31 insertions(+), 21 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 194ef7fed9d6..bdb3be76d10c 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -1018,11 +1018,13 @@ static inline unsigned long zone_cma_pages(struct zone *zone)
>  #endif
>  }
>
> +/* This is unstable unless you hold mem_hotplug_lock. */
>  static inline unsigned long zone_end_pfn(const struct zone *zone)
>  {
> -	return zone->zone_start_pfn + zone->spanned_pages;
> +	return zone->zone_start_pfn + READ_ONCE(zone->spanned_pages);
It's weird to apply that logic only to spanned_pages, given that
zone_start_pfn can (and will) similarly change when onlining/offlining
memory.
--
Cheers,
David / dhildenb