This is a note to let you know that I've just added the patch titled

    mips: use nth_page() in place of direct struct page manipulation

to the 6.6-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mips-use-nth_page-in-place-of-direct-struct-page-manipulation.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From aa5fe31b6b59210cb4ea28a59e68781f48eeca74 Mon Sep 17 00:00:00 2001
From: Zi Yan <ziy@xxxxxxxxxx>
Date: Wed, 13 Sep 2023 16:12:48 -0400
Subject: mips: use nth_page() in place of direct struct page manipulation

From: Zi Yan <ziy@xxxxxxxxxx>

commit aa5fe31b6b59210cb4ea28a59e68781f48eeca74 upstream.

__flush_dcache_pages() is called during hugetlb migration via
migrate_pages() -> migrate_hugetlbs() -> unmap_and_move_huge_page() ->
move_to_new_folio() -> flush_dcache_folio().  And with hugetlb and without
sparsemem vmemmap, struct page is not guaranteed to be contiguous beyond a
section.  Use nth_page() instead.

Without the fix, a wrong address might be used for data cache page flush.
No bug is reported.  The fix comes from code inspection.

Link: https://lkml.kernel.org/r/20230913201248.452081-6-zi.yan@xxxxxxxx
Fixes: 15fa3e8e3269 ("mips: implement the new page table range API")
Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Cc: Mike Rapoport (IBM) <rppt@xxxxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Thomas Bogendoerfer <tsbogend@xxxxxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/mips/mm/cache.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -117,7 +117,7 @@ void __flush_dcache_pages(struct page *p
 	 * get faulted into the tlb (and thus flushed) anyways.
 	 */
 	for (i = 0; i < nr; i++) {
-		addr = (unsigned long)kmap_local_page(page + i);
+		addr = (unsigned long)kmap_local_page(nth_page(page, i));
		flush_data_cache_page(addr);
		kunmap_local((void *)addr);
	}


Patches currently in stable-queue which might be from ziy@xxxxxxxxxx are

queue-6.6/mm-memory_hotplug-use-pfn-math-in-place-of-direct-struct-page-manipulation.patch
queue-6.6/fs-use-nth_page-in-place-of-direct-struct-page-manipulation.patch
queue-6.6/mm-cma-use-nth_page-in-place-of-direct-struct-page-manipulation.patch
queue-6.6/mips-use-nth_page-in-place-of-direct-struct-page-manipulation.patch
queue-6.6/mm-hugetlb-use-nth_page-in-place-of-direct-struct-page-manipulation.patch
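
Editor's note on the change above: under SPARSEMEM without vmemmap, the struct page
array (the memmap) is allocated per memory section, so "page + i" can walk past the
end of one section's array when a hugetlb folio crosses a section boundary, whereas
nth_page() steps via the page frame number and resolves to the correct struct page.
The small userspace program below is only an illustrative sketch of that difference;
toy_page, section_mem_map and the tiny PAGES_PER_SECTION value are invented here for
the demonstration and are not the kernel's real definitions.

/*
 * Minimal userspace sketch (not kernel code) of why "page + i" is unsafe
 * when each section's struct page array is allocated separately, and why
 * stepping via the pfn, as nth_page() does on sparsemem without vmemmap,
 * stays correct.  All names are invented for illustration.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGES_PER_SECTION 4UL	/* tiny on purpose */
#define NR_SECTIONS 2UL

struct toy_page {
	unsigned long pfn;
};

/* One separately allocated "memmap" per section; not adjacent in general. */
static struct toy_page *section_mem_map[NR_SECTIONS];

static struct toy_page *pfn_to_page(unsigned long pfn)
{
	return &section_mem_map[pfn / PAGES_PER_SECTION][pfn % PAGES_PER_SECTION];
}

static unsigned long page_to_pfn(struct toy_page *page)
{
	return page->pfn;
}

/* pfn-based stepping, analogous to nth_page() in the sparsemem case. */
static struct toy_page *nth_page(struct toy_page *page, unsigned long n)
{
	return pfn_to_page(page_to_pfn(page) + n);
}

int main(void)
{
	for (unsigned long s = 0; s < NR_SECTIONS; s++) {
		section_mem_map[s] = calloc(PAGES_PER_SECTION,
					    sizeof(struct toy_page));
		for (unsigned long i = 0; i < PAGES_PER_SECTION; i++)
			section_mem_map[s][i].pfn = s * PAGES_PER_SECTION + i;
	}

	/* A folio starting at pfn 2 that crosses the section boundary at pfn 4. */
	struct toy_page *head = pfn_to_page(2);

	for (unsigned long i = 0; i < 4; i++) {
		struct toy_page *right = nth_page(head, i);
		/*
		 * Raw pointer arithmetic, computed as an address only, so we
		 * never dereference anything outside the first section's array.
		 */
		uintptr_t naive = (uintptr_t)head + i * sizeof(*head);

		printf("i=%lu: nth_page -> pfn %lu, head + i %s\n",
		       i, right->pfn,
		       naive == (uintptr_t)right ? "agrees" : "points elsewhere");
	}
	return 0;
}

Once the loop crosses pfn 4, the naive address keeps marching past the end of the
first section's array while nth_page() correctly lands in the second section's array,
which mirrors why the one-line change in __flush_dcache_pages() matters for hugetlb
folios spanning a section boundary.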