If a compound page is being split while dump_page() is being run on that
page, we can end up calling compound_mapcount() on a page that is no
longer compound. This leads to a crash (already seen at least once in
the field), due to the VM_BUG_ON_PAGE() assertion inside
compound_mapcount().

(The above is from Matthew Wilcox's analysis of Qian Cai's bug report.)

A similar problem is possible, via compound_pincount() instead of
compound_mapcount().

In order to avoid this kind of crash, make dump_page() slightly more
robust, by providing a pair of simpler routines that don't contain
assertions: head_mapcount() and head_pincount().

For debug tools, we don't want to go *too* far in this direction, but
this is a simple small fix, and the crash has already been seen, so it's
a good trade-off.

Reported-by: Qian Cai <cai@xxxxxx>
Suggested-by: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Signed-off-by: John Hubbard <jhubbard@xxxxxxxxxx>
---

Hi,

I'm assuming that a fix is not required for -stable, but let me know if
others feel differently. The dump_page() code has changed a lot in that
area.

Changes since v1 [1]:

1) Rebased onto mmotm

2) Used a simpler head_*count() approach.

3) Added Matthew's Suggested-by: tag

4) Support pincount as well as mapcount.

[1] https://lore.kernel.org/linux-mm/20200804183943.1244828-1-jhubbard@xxxxxxxxxx/

thanks,
John Hubbard

 include/linux/mm.h | 14 ++++++++++++--
 mm/debug.c         |  6 +++---
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4f12b2465e80..8ab941cf73f4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -776,6 +776,11 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
 extern void kvfree(const void *addr);
 extern void kvfree_sensitive(const void *addr, size_t len);
 
+static inline int head_mapcount(struct page *head)
+{
+	return atomic_read(compound_mapcount_ptr(head)) + 1;
+}
+
 /*
  * Mapcount of compound page as a whole, does not include mapped sub-pages.
  *
@@ -785,7 +790,7 @@ static inline int compound_mapcount(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
 	page = compound_head(page);
-	return atomic_read(compound_mapcount_ptr(page)) + 1;
+	return head_mapcount(page);
 }
 
 /*
@@ -898,11 +903,16 @@ static inline bool hpage_pincount_available(struct page *page)
 	return PageCompound(page) && compound_order(page) > 1;
 }
 
+static inline int head_pincount(struct page *head)
+{
+	return atomic_read(compound_pincount_ptr(head));
+}
+
 static inline int compound_pincount(struct page *page)
 {
 	VM_BUG_ON_PAGE(!hpage_pincount_available(page), page);
 	page = compound_head(page);
-	return atomic_read(compound_pincount_ptr(page));
+	return head_pincount(page);
 }
 
 static inline void set_compound_order(struct page *page, unsigned int order)
diff --git a/mm/debug.c b/mm/debug.c
index c27fff1e3ca8..69b60637112b 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -102,12 +102,12 @@ void __dump_page(struct page *page, const char *reason)
 		if (hpage_pincount_available(page)) {
 			pr_warn("head:%p order:%u compound_mapcount:%d compound_pincount:%d\n",
 					head, compound_order(head),
-					compound_mapcount(head),
-					compound_pincount(head));
+					head_mapcount(head),
+					head_pincount(head));
 		} else {
 			pr_warn("head:%p order:%u compound_mapcount:%d\n",
 					head, compound_order(head),
-					compound_mapcount(head));
+					head_mapcount(head));
 		}
 	}
 	if (PageKsm(page))
-- 
2.28.0
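
P.S. For readers less familiar with this corner of mm/, here is a minimal
userspace sketch (not kernel code) of the pattern the patch relies on:
the dumper resolves the head once, then reads the counters through
assertion-free helpers, so a concurrent split can at worst produce a
stale number in the debug output rather than trip a BUG. struct
mock_page, dump_compound_counts() and the C11 atomics below are
illustrative stand-ins invented here for struct page,
compound_mapcount_ptr()/compound_pincount_ptr() and the kernel's
atomic_t; they are not part of the patch.

/* Userspace model only -- NOT kernel code. */
#include <stdatomic.h>
#include <stdio.h>

struct mock_page {
	_Atomic int compound_mapcount;	/* stored biased: real count - 1 */
	_Atomic int compound_pincount;
};

/* Assertion-free readers: they trust the caller to pass a head page. */
static int head_mapcount(struct mock_page *head)
{
	return atomic_load(&head->compound_mapcount) + 1;
}

static int head_pincount(struct mock_page *head)
{
	return atomic_load(&head->compound_pincount);
}

/*
 * A dump_page()-style reporter: no PageCompound() re-check here, so a
 * racing split cannot turn the report itself into a crash.
 */
static void dump_compound_counts(struct mock_page *head)
{
	printf("head:%p compound_mapcount:%d compound_pincount:%d\n",
	       (void *)head, head_mapcount(head), head_pincount(head));
}

int main(void)
{
	struct mock_page head = {
		.compound_mapcount = 2,	/* i.e. a mapcount of 3 */
		.compound_pincount = 1,
	};

	dump_compound_counts(&head);
	return 0;
}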