The quilt patch titled
     Subject: lib/test_hmm: make dmirror_atomic_map() consume a single page
has been removed from the -mm tree.  Its filename was
     lib-test_hmm-make-dmirror_atomic_map-consume-a-single-page.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: lib/test_hmm: make dmirror_atomic_map() consume a single page
Date: Wed, 26 Feb 2025 14:22:53 +0100

Patch series "mm: cleanups for device-exclusive entries (hmm)", v2.

Some smaller device-exclusive cleanups I have lying around.


This patch (of 5):

The caller now always passes a single page; let's simplify, and return
"0" on success.

Link: https://lkml.kernel.org/r/20250226132257.2826043-1-david@xxxxxxxxxx
Link: https://lkml.kernel.org/r/20250226132257.2826043-2-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Reviewed-by: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Jérôme Glisse <jglisse@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
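The caller-side effect of the new return convention, restated from the
dmirror_exclusive() hunk further below as a before/after sketch:

	/* Before: dmirror_atomic_map() returned the number of mapped
	 * pages, so the caller had to translate "1" into success.
	 */
	ret = dmirror_atomic_map(addr, addr + PAGE_SIZE, &page, dmirror);
	ret = ret == 1 ? 0 : -EBUSY;

	/* After: a single page is passed in and 0 is returned on success
	 * (or an xa_err() value on failure), so ret can be used as-is.
	 */
	ret = dmirror_atomic_map(addr, page, dmirror);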
---

 lib/test_hmm.c |   32 ++++++++++----------------------
 1 file changed, 10 insertions(+), 22 deletions(-)

--- a/lib/test_hmm.c~lib-test_hmm-make-dmirror_atomic_map-consume-a-single-page
+++ a/lib/test_hmm.c
@@ -706,34 +706,23 @@ static int dmirror_check_atomic(struct d
 	return 0;
 }
 
-static int dmirror_atomic_map(unsigned long start, unsigned long end,
-			      struct page **pages, struct dmirror *dmirror)
+static int dmirror_atomic_map(unsigned long addr, struct page *page,
+			      struct dmirror *dmirror)
 {
-	unsigned long pfn, mapped = 0;
-	int i;
+	void *entry;
 
 	/* Map the migrated pages into the device's page tables. */
 	mutex_lock(&dmirror->mutex);
 
-	for (i = 0, pfn = start >> PAGE_SHIFT; pfn < (end >> PAGE_SHIFT); pfn++, i++) {
-		void *entry;
-
-		if (!pages[i])
-			continue;
-
-		entry = pages[i];
-		entry = xa_tag_pointer(entry, DPT_XA_TAG_ATOMIC);
-		entry = xa_store(&dmirror->pt, pfn, entry, GFP_ATOMIC);
-		if (xa_is_err(entry)) {
-			mutex_unlock(&dmirror->mutex);
-			return xa_err(entry);
-		}
-
-		mapped++;
+	entry = xa_tag_pointer(page, DPT_XA_TAG_ATOMIC);
+	entry = xa_store(&dmirror->pt, addr >> PAGE_SHIFT, entry, GFP_ATOMIC);
+	if (xa_is_err(entry)) {
+		mutex_unlock(&dmirror->mutex);
+		return xa_err(entry);
 	}
 
 	mutex_unlock(&dmirror->mutex);
-	return mapped;
+	return 0;
 }
 
 static int dmirror_migrate_finalize_and_map(struct migrate_vma *args,
@@ -803,8 +792,7 @@ static int dmirror_exclusive(struct dmir
 			break;
 		}
 
-		ret = dmirror_atomic_map(addr, addr + PAGE_SIZE, &page, dmirror);
-		ret = ret == 1 ? 0 : -EBUSY;
+		ret = dmirror_atomic_map(addr, page, dmirror);
 		folio_unlock(folio);
 		folio_put(folio);
 	}
_

Patches currently in -mm which might be from david@xxxxxxxxxx are

mm-factor-out-large-folio-handling-from-folio_order-into-folio_large_order.patch
mm-factor-out-large-folio-handling-from-folio_nr_pages-into-folio_large_nr_pages.patch
mm-let-_folio_nr_pages-overlay-memcg_data-in-first-tail-page.patch
mm-let-_folio_nr_pages-overlay-memcg_data-in-first-tail-page-fix.patch
mm-move-hugetlb-specific-things-in-folio-to-page.patch
mm-move-_pincount-in-folio-to-page-on-32bit.patch
mm-move-_entire_mapcount-in-folio-to-page-on-32bit.patch
mm-rmap-pass-dst_vma-to-folio_dup_file_rmap_pte-and-friends.patch
mm-rmap-pass-vma-to-__folio_add_rmap.patch
mm-rmap-abstract-large-mapcount-operations-for-large-folios-hugetlb.patch
bit_spinlock-__always_inline-unlock-functions.patch
mm-rmap-use-folio_large_nr_pages-in-add-remove-functions.patch
mm-rmap-basic-mm-owner-tracking-for-large-folios-hugetlb.patch
mm-copy-on-write-cow-reuse-support-for-pte-mapped-thp.patch
mm-convert-folio_likely_mapped_shared-to-folio_maybe_mapped_shared.patch
mm-config_no_page_mapcount-to-prepare-for-not-maintain-per-page-mapcounts-in-large-folios.patch
fs-proc-page-remove-per-page-mapcount-dependency-for-proc-kpagecount-config_no_page_mapcount.patch
fs-proc-task_mmu-remove-per-page-mapcount-dependency-for-pm_mmap_exclusive-config_no_page_mapcount.patch
fs-proc-task_mmu-remove-per-page-mapcount-dependency-for-mapmax-config_no_page_mapcount.patch
fs-proc-task_mmu-remove-per-page-mapcount-dependency-for-smaps-smaps_rollup-config_no_page_mapcount.patch
mm-stop-maintaining-the-per-page-mapcount-of-large-folios-config_no_page_mapcount.patch