The patch titled
     Subject: mm/memory: move page_count() check into validate_page_before_insert()
has been added to the -mm mm-unstable branch.  Its filename is
     mm-memory-move-page_count-check-into-validate_page_before_insert.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-memory-move-page_count-check-into-validate_page_before_insert.patch

This patch will later appear in the mm-unstable branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: David Hildenbrand <david@xxxxxxxxxx>
Subject: mm/memory: move page_count() check into validate_page_before_insert()
Date: Wed, 22 May 2024 14:57:11 +0200

Patch series "mm/memory: cleanly support zeropage in vm_insert_page*(),
vm_map_pages*() and vmf_insert_mixed()", v2.

There is interest in mapping zeropages via vm_insert_pages() [1] into
MAP_SHARED mappings.

For now, we only get zeropages in MAP_SHARED mappings via
vmf_insert_mixed() from FSDAX code, and I think it's a bit shaky in some
cases because we refcount the zeropage when mapping it, but not
necessarily always when unmapping it ... and we should actually never
refcount it.

It's all a bit tricky, especially how zeropages in MAP_SHARED mappings
interact with GUP (FOLL_LONGTERM), mprotect(), write-faults and s390x
forbidding the shared zeropage (rewrite [2] is now upstream).

This series tries to take the careful approach of only allowing the
zeropage where it is likely safe to use (which should cover the existing
FSDAX use case and [1]), preventing it from accidentally getting mapped
writable during a write fault, mprotect() etc., and preventing issues
with FOLL_LONGTERM in the future with other users.

Tested with a patch from Vincent that uses the zeropage in the context of
[1].

[1] https://lkml.kernel.org/r/20240430111354.637356-1-vdonnefort@xxxxxxxxxx
[2] https://lkml.kernel.org/r/20240411161441.910170-1-david@xxxxxxxxxx


This patch (of 3):

We'll now also cover the case where insert_page() is called from
__vm_insert_mixed(), which sounds like the right thing to do.
Link: https://lkml.kernel.org/r/20240522125713.775114-2-david@xxxxxxxxxx
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Vincent Donnefort <vdonnefort@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

--- a/mm/memory.c~mm-memory-move-page_count-check-into-validate_page_before_insert
+++ a/mm/memory.c
@@ -1987,6 +1987,8 @@ static int validate_page_before_insert(s
 {
 	struct folio *folio = page_folio(page);
 
+	if (!folio_ref_count(folio))
+		return -EINVAL;
 	if (folio_test_anon(folio) || folio_test_slab(folio) ||
 	    page_has_type(page))
 		return -EINVAL;
@@ -2041,8 +2043,6 @@ static int insert_page_in_batch_locked(s
 {
 	int err;
 
-	if (!page_count(page))
-		return -EINVAL;
 	err = validate_page_before_insert(page);
 	if (err)
 		return err;
@@ -2176,8 +2176,6 @@ int vm_insert_page(struct vm_area_struct
 {
 	if (addr < vma->vm_start || addr >= vma->vm_end)
 		return -EFAULT;
-	if (!page_count(page))
-		return -EINVAL;
 	if (!(vma->vm_flags & VM_MIXEDMAP)) {
 		BUG_ON(mmap_read_trylock(vma->vm_mm));
 		BUG_ON(vma->vm_flags & VM_PFNMAP);
_

Patches currently in -mm which might be from david@xxxxxxxxxx are

mm-memory-move-page_count-check-into-validate_page_before_insert.patch
mm-memory-cleanly-support-zeropage-in-vm_insert_page-vm_map_pages-and-vmf_insert_mixed.patch
mm-rmap-sanity-check-that-zeropages-are-not-passed-to-rmap.patch
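
[Editor's addendum, not part of the patch mail]

For readers following along, a rough sketch of how
validate_page_before_insert() would read with this change applied.  The
parameter list and the flush_dcache_folio()/return tail are assumed from
surrounding mainline context rather than shown in the hunk above, so
treat this as illustrative only:

	/*
	 * Sketch only: consolidated validation after the patch.  The
	 * refcount check previously duplicated in vm_insert_page() and
	 * insert_page_in_batch_locked() now lives here, so every caller
	 * that funnels through this helper (including insert_page() via
	 * __vm_insert_mixed()) rejects pages with a zero refcount.
	 */
	static int validate_page_before_insert(struct page *page)
	{
		struct folio *folio = page_folio(page);

		/* Reject pages that are not refcounted (refcount == 0). */
		if (!folio_ref_count(folio))
			return -EINVAL;
		/* Anon, slab and typed pages must never be inserted this way. */
		if (folio_test_anon(folio) || folio_test_slab(folio) ||
		    page_has_type(page))
			return -EINVAL;
		flush_dcache_folio(folio);	/* assumed tail, as in mainline */
		return 0;
	}

The practical effect is that the precondition is enforced in one place
for all insertion paths instead of being repeated at two of the three
call sites.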