From: Miaohe Lin <linmiaohe@xxxxxxxxxx>

[ Upstream commit ed913b055a74b723976f8e885a3395162a0371e6 ]

If make_device_exclusive_range() fails or returns fewer pages marked for
exclusive access than required, the remaining fields of pages will be
left uninitialized.  So dmirror_atomic_map() will access those yet
uninitialized fields of pages.

To fix it, do dmirror_atomic_map() iff all pages are marked for
exclusive access (we will break if mapped is less than required anyway)
so we won't access those uninitialized fields of pages.

Link: https://lkml.kernel.org/r/20220609130835.35110-1-linmiaohe@xxxxxxxxxx
Fixes: b659baea7546 ("mm: selftests for exclusive device memory")
Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
Cc: Jerome Glisse <jglisse@xxxxxxxxxx>
Cc: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxx>
Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
 lib/test_hmm.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index ac794e354069..a89cb4281c9d 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -731,7 +731,7 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 
 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
-		unsigned long mapped;
+		unsigned long mapped = 0;
 		int i;
 
 		if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
@@ -740,7 +740,13 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 			next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
 
 		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
-		mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+		/*
+		 * Do dmirror_atomic_map() iff all pages are marked for
+		 * exclusive access to avoid accessing uninitialized
+		 * fields of pages.
+		 */
+		if (ret == (next - addr) >> PAGE_SHIFT)
+			mapped = dmirror_atomic_map(addr, next, pages, dmirror);
 		for (i = 0; i < ret; i++) {
 			if (pages[i]) {
 				unlock_page(pages[i]);
-- 
2.35.1
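
For readers outside the kernel tree, here is a minimal standalone C sketch of the pattern the fix relies on: the helper reports how many entries of a caller-supplied array it actually initialized, and the caller consumes the array only when every requested entry was filled in. The names fill_some(), consume_all(), and requested are hypothetical stand-ins for illustration only; they are not part of lib/test_hmm.c.

#include <stdio.h>

/*
 * Hypothetical stand-in for make_device_exclusive_range(): it initializes
 * only the entries it managed to handle and returns that count.  Entries
 * beyond the return value stay untouched, like the pages[] array in
 * dmirror_exclusive().
 */
static int fill_some(int *slots, int requested)
{
	int done = requested / 2;	/* pretend only half succeeded */

	for (int i = 0; i < done; i++)
		slots[i] = i;
	return done;
}

/* Hypothetical stand-in for dmirror_atomic_map(): reads every entry. */
static long consume_all(const int *slots, int count)
{
	long sum = 0;

	for (int i = 0; i < count; i++)
		sum += slots[i];
	return sum;
}

int main(void)
{
	int slots[64];			/* deliberately left uninitialized */
	int requested = 64;
	long mapped = 0;		/* mirrors 'mapped = 0' in the patch */
	int ret = fill_some(slots, requested);

	/*
	 * Consume the array only when every requested entry was filled,
	 * mirroring the 'if (ret == (next - addr) >> PAGE_SHIFT)' check;
	 * otherwise consume_all() would read uninitialized slots.
	 */
	if (ret == requested)
		mapped = consume_all(slots, requested);

	printf("ret=%d mapped=%ld\n", ret, mapped);
	return 0;
}

With mapped pre-initialized to 0, the partial-success path falls through cleanly instead of reporting a garbage count, which is the same reason the patch changes the declaration of mapped in dmirror_exclusive().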