The patch titled
     Subject: fsdax: hold dax lock over mapping insertion
has been added to the -mm mm-unstable branch.  Its filename is
     fsdax-hold-dax-lock-over-mapping-insertion.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/fsdax-hold-dax-lock-over-mapping-insertion.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Dan Williams <dan.j.williams@xxxxxxxxx>
Subject: fsdax: hold dax lock over mapping insertion
Date: Fri, 14 Oct 2022 16:57:37 -0700

In preparation for dax_insert_entry() to start taking page and pgmap
references ensure that page->pgmap is valid by holding the
dax_read_lock() over both dax_direct_access() and dax_insert_entry().

I.e. the code that wants to elevate the reference count of a pgmap page
from 0 -> 1 must ensure that the pgmap is not exiting and will not start
exiting until the proper references have been taken.

Link: https://lkml.kernel.org/r/166579185727.2236710.8711235794537270051.stgit@xxxxxxxxxxxxxxxxxxxxxxxxx
Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: "Darrick J. Wong" <djwong@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: Alex Deucher <alexander.deucher@xxxxxxx>
Cc: Alistair Popple <apopple@xxxxxxxxxx>
Cc: Ben Skeggs <bskeggs@xxxxxxxxxx>
Cc: "Christian König" <christian.koenig@xxxxxxx>
Cc: Daniel Vetter <daniel@xxxxxxxx>
Cc: Dave Chinner <david@xxxxxxxxxxxxx>
Cc: David Airlie <airlied@xxxxxxxx>
Cc: Felix Kuehling <Felix.Kuehling@xxxxxxx>
Cc: Jerome Glisse <jglisse@xxxxxxxxxx>
Cc: Karol Herbst <kherbst@xxxxxxxxxx>
Cc: kernel test robot <lkp@xxxxxxxxx>
Cc: Lyude Paul <lyude@xxxxxxxxxx>
Cc: "Pan, Xinhui" <Xinhui.Pan@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 fs/dax.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

--- a/fs/dax.c~fsdax-hold-dax-lock-over-mapping-insertion
+++ a/fs/dax.c
@@ -1107,10 +1107,9 @@ static int dax_iomap_direct_access(const
                 size_t size, void **kaddr, pfn_t *pfnp)
 {
         pgoff_t pgoff = dax_iomap_pgoff(iomap, pos);
-        int id, rc = 0;
         long length;
+        int rc = 0;
 
-        id = dax_read_lock();
         length = dax_direct_access(iomap->dax_dev, pgoff, PHYS_PFN(size),
                         DAX_ACCESS, kaddr, pfnp);
         if (length < 0) {
@@ -1135,7 +1134,6 @@ out_check_addr:
         if (!*kaddr)
                 rc = -EFAULT;
 out:
-        dax_read_unlock(id);
         return rc;
 }
 
@@ -1591,7 +1589,7 @@ static vm_fault_t dax_fault_iter(struct
         loff_t pos = (loff_t)xas->xa_index << PAGE_SHIFT;
         bool write = iter->flags & IOMAP_WRITE;
         unsigned long entry_flags = pmd ? DAX_PMD : 0;
-        int err = 0;
+        int err = 0, id;
         pfn_t pfn;
         void *kaddr;
 
@@ -1611,11 +1609,15 @@ static vm_fault_t dax_fault_iter(struct
                 return pmd ? VM_FAULT_FALLBACK : VM_FAULT_SIGBUS;
         }
 
+        id = dax_read_lock();
         err = dax_iomap_direct_access(iomap, pos, size, &kaddr, &pfn);
-        if (err)
+        if (err) {
+                dax_read_unlock(id);
                 return pmd ? VM_FAULT_FALLBACK : dax_fault_return(err);
+        }
 
         *entry = dax_insert_entry(xas, vmf, iter, *entry, pfn, entry_flags);
+        dax_read_unlock(id);
 
         if (write &&
             srcmap->type != IOMAP_HOLE && srcmap->addr != iomap->addr) {
_

Patches currently in -mm which might be from dan.j.williams@xxxxxxxxx are

fsdax-wait-on-page-not-page-_refcount.patch
fsdax-use-dax_page_idle-to-document-dax-busy-page-checking.patch
fsdax-include-unmapped-inodes-for-page-idle-detection.patch
fsdax-introduce-dax_zap_mappings.patch
fsdax-wait-for-pinned-pages-during-truncate_inode_pages_final.patch
fsdax-validate-dax-layouts-broken-before-truncate.patch
fsdax-hold-dax-lock-over-mapping-insertion.patch
fsdax-update-dax_insert_entry-calling-convention-to-return-an-error.patch
fsdax-rework-for_each_mapped_pfn-to-dax_for_each_folio.patch
fsdax-introduce-pgmap_request_folios.patch
fsdax-rework-dax_insert_entry-calling-convention.patch
fsdax-cleanup-dax_associate_entry.patch
devdax-minor-warning-fixups.patch
devdax-fix-sparse-lock-imbalance-warning.patch
libnvdimm-pmem-support-pmem-block-devices-without-dax.patch
devdax-move-address_space-helpers-to-the-dax-core.patch
devdax-sparse-fixes-for-xarray-locking.patch
devdax-sparse-fixes-for-vmfault_t-dax-entry-conversions.patch
devdax-sparse-fixes-for-vm_fault_t-in-tracepoints.patch
devdax-add-pud-support-to-the-dax-mapping-infrastructure.patch
devdax-use-dax_insert_entry-dax_delete_mapping_entry.patch
mm-memremap_pages-replace-zone_device_page_init-with-pgmap_request_folios.patch
mm-memremap_pages-initialize-all-zone_device-pages-to-start-at-refcount-0.patch
mm-meremap_pages-delete-put_devmap_managed_page_refs.patch
mm-gup-drop-dax-pgmap-accounting.patch
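
(Editor's note, not part of the patch or the -mm mail: the changelog's rule
that "a 0 -> 1 refcount elevation is only legitimate while the dax read lock
excludes pgmap teardown" can be sketched as a small, self-contained userspace
C analogue.  Every name below -- pgmap_analogue, fault_path(),
teardown_path() -- is hypothetical and only mirrors the roles played by
dax_read_lock()/dax_read_unlock(), dax_insert_entry(), and pgmap exit; it is
an illustration of the locking shape, not kernel code.)

/* build with: cc -pthread sketch.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical analogue: a reference may only go from 0 to 1 while a
 * read lock guarantees the backing object is not being torn down.
 */
struct pgmap_analogue {
        pthread_rwlock_t lock;      /* plays the role of dax_read_lock() */
        bool exiting;               /* set under the write lock at teardown */
        _Atomic int refcount;       /* page reference count stand-in */
};

/* Fault path: lock, check liveness, take the reference, then unlock. */
static bool fault_path(struct pgmap_analogue *p)
{
        bool mapped = false;

        pthread_rwlock_rdlock(&p->lock);
        if (!p->exiting) {
                /* safe point for the 0 -> 1 elevation ("dax_insert_entry()") */
                atomic_fetch_add(&p->refcount, 1);
                mapped = true;
        }
        pthread_rwlock_unlock(&p->lock);
        return mapped;
}

/* Teardown path: the write lock waits out any in-flight fault. */
static void teardown_path(struct pgmap_analogue *p)
{
        pthread_rwlock_wrlock(&p->lock);
        p->exiting = true;
        pthread_rwlock_unlock(&p->lock);
}

int main(void)
{
        struct pgmap_analogue p = { .lock = PTHREAD_RWLOCK_INITIALIZER };

        printf("before teardown: mapped=%d\n", fault_path(&p)); /* prints 1 */
        teardown_path(&p);
        printf("after teardown:  mapped=%d\n", fault_path(&p)); /* prints 0 */
        return 0;
}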