The patch titled
     Subject: lib/test_hmm: avoid accessing uninitialized pages
has been added to the -mm mm-unstable branch.  Its filename is
     lib-test_hmm-avoid-accessing-uninitialized-pages.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/lib-test_hmm-avoid-accessing-uninitialized-pages.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything branch at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm and is updated
there every 2-3 working days

------------------------------------------------------
From: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Subject: lib/test_hmm: avoid accessing uninitialized pages
Date: Thu, 9 Jun 2022 21:08:35 +0800

If make_device_exclusive_range() fails, or marks fewer pages for
exclusive access than required, the remaining entries of the pages array
are left uninitialized, and dmirror_atomic_map() will then access those
uninitialized entries.

To fix it, call dmirror_atomic_map() only if all pages are marked for
exclusive access (we will break out of the loop if mapped is less than
required anyway), so the uninitialized entries are never accessed.

Link: https://lkml.kernel.org/r/20220609130835.35110-1-linmiaohe@xxxxxxxxxx
Fixes: b659baea7546 ("mm: selftests for exclusive device memory")
Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Jerome Glisse <jglisse@xxxxxxxxxx>
Cc: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxx>
Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 lib/test_hmm.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

--- a/lib/test_hmm.c~lib-test_hmm-avoid-accessing-uninitialized-pages
+++ a/lib/test_hmm.c
@@ -797,7 +797,7 @@ static int dmirror_exclusive(struct dmir

 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
-		unsigned long mapped;
+		unsigned long mapped = 0;
 		int i;

 		if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
@@ -806,7 +806,13 @@ static int dmirror_exclusive(struct dmir
 			next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);

 		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
-		mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+		/*
+		 * Do dmirror_atomic_map() iff all pages are marked for
+		 * exclusive access to avoid accessing uninitialized
+		 * fields of pages.
+		 */
+		if (ret == (next - addr) >> PAGE_SHIFT)
+			mapped = dmirror_atomic_map(addr, next, pages, dmirror);
 		for (i = 0; i < ret; i++) {
 			if (pages[i]) {
 				unlock_page(pages[i]);
_

Patches currently in -mm which might be from linmiaohe@xxxxxxxxxx are

maintainers-add-myself-as-a-memory-failure-reviewer.patch
mm-shmemc-clean-up-comment-of-shmem_swapin_folio.patch
mm-reduce-the-rcu-lock-duration.patch
mm-migration-remove-unneeded-lock-page-and-pagemovable-check.patch
mm-migration-return-errno-when-isolate_huge_page-failed.patch
mm-migration-fix-potential-pte_unmap-on-an-not-mapped-pte.patch
mm-memremap-fix-wrong-function-name-above-memremap_pages.patch
mm-swapfile-make-security_vm_enough_memory_mm-work-as-expected.patch
mm-swapfile-fix-possible-data-races-of-inuse_pages.patch
mm-swap-remove-swap_cache_info-statistics.patch
mm-vmscan-dont-try-to-reclaim-freed-folios.patch
lib-test_hmm-avoid-accessing-uninitialized-pages.patch
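For illustration only, and not part of the patch: a minimal userspace
sketch of the rule the fix relies on.  make_device_exclusive_range()
fills in only as many pages[] entries as it reports, so the caller must
compare the returned count with the number of entries it asked for
before touching the whole array.  The names make_exclusive_range() and
consume_pages() below are hypothetical stand-ins for
make_device_exclusive_range() and dmirror_atomic_map().

#include <stdio.h>
#include <stddef.h>

#define NPAGES 8

/*
 * Hypothetical stand-in for make_device_exclusive_range(): it fills in
 * only the first few entries of pages[] and returns how many it filled.
 * Entries beyond the returned count are left untouched.
 */
static int make_exclusive_range(void *pages[], size_t want)
{
	static char backing[NPAGES];
	size_t n = want / 2;	/* pretend only half the range succeeded */

	for (size_t i = 0; i < n; i++)
		pages[i] = &backing[i];
	return (int)n;
}

/* Hypothetical stand-in for dmirror_atomic_map(): reads every entry. */
static void consume_pages(void *pages[], size_t count)
{
	for (size_t i = 0; i < count; i++)
		printf("page %zu -> %p\n", i, pages[i]);
}

int main(void)
{
	void *pages[NPAGES];	/* deliberately not zero-initialized */
	int ret = make_exclusive_range(pages, NPAGES);

	/* Same rule as the patch: only map when every entry was filled in. */
	if (ret == NPAGES)
		consume_pages(pages, NPAGES);
	else
		printf("only %d of %d pages ready, skipping map\n", ret, NPAGES);

	return 0;
}

Before the fix, the call corresponding to consume_pages() ran
unconditionally, so a short or failed make_device_exclusive_range()
left the tail of pages[] uninitialized and it was read anyway.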