On Tue, Mar 8, 2022 at 11:27 AM Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> So I think the fix for this all might be something like the attached
> (TOTALLY UNTESTED)!

Still entirely untested, but I wrote a commit message for it in the
hopes that this actually works and Andreas can verify that it fixes
the issue.

Same exact patch, it's just now in my local experimental tree as a commit.

               Linus
From d8c2e0a81274d67edfff3769c4c37e364ba8d6f8 Mon Sep 17 00:00:00 2001
From: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Date: Tue, 8 Mar 2022 11:55:48 -0800
Subject: [PATCH] mm: gup: make fault_in_safe_writeable() use fixup_user_fault()

Instead of using GUP, make fault_in_safe_writeable() actually force a
'handle_mm_fault()' using the same fixup_user_fault() machinery that
futexes already use.

Using the GUP machinery meant that fault_in_safe_writeable() did not do
everything that a real fault would do, ranging from not auto-expanding
the stack segment, to not updating accessed or dirty flags in the page
tables (GUP sets those flags on the pages themselves).

The latter causes problems on architectures (like s390) that do
accessed bit handling in software, which meant that
fault_in_safe_writeable() didn't actually do all the fault handling it
needed to.

Reported-by: Andreas Gruenbacher <agruenba@xxxxxxxxxx>
Link: https://lore.kernel.org/all/CAHc6FU5nP+nziNGG0JAF1FUx-GV7kKFvM7aZuU_XD2_1v4vnvg@xxxxxxxxxxxxxx/
Cc: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
---
 mm/gup.c | 40 ++++++++++++----------------------------
 1 file changed, 12 insertions(+), 28 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index a9d4d724aef7..9e085e7b9c28 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1745,44 +1745,28 @@ EXPORT_SYMBOL(fault_in_writeable);
 size_t fault_in_safe_writeable(const char __user *uaddr, size_t size)
 {
 	unsigned long start = (unsigned long)untagged_addr(uaddr);
-	unsigned long end, nstart, nend;
+	unsigned long end, nstart;
 	struct mm_struct *mm = current->mm;
-	struct vm_area_struct *vma = NULL;
-	int locked = 0;
+	const unsigned int fault_flags = FAULT_FLAG_WRITE | FAULT_FLAG_KILLABLE;
+	const size_t max_size = 4 * PAGE_SIZE;
 
 	nstart = start & PAGE_MASK;
-	end = PAGE_ALIGN(start + size);
+	end = PAGE_ALIGN(start + min(size, max_size));
 	if (end < nstart)
 		end = 0;
-	for (; nstart != end; nstart = nend) {
-		unsigned long nr_pages;
-		long ret;
 
-		if (!locked) {
-			locked = 1;
-			mmap_read_lock(mm);
-			vma = find_vma(mm, nstart);
-		} else if (nstart >= vma->vm_end)
-			vma = vma->vm_next;
-		if (!vma || vma->vm_start >= end)
-			break;
-		nend = end ? min(end, vma->vm_end) : vma->vm_end;
-		if (vma->vm_flags & (VM_IO | VM_PFNMAP))
-			continue;
-		if (nstart < vma->vm_start)
-			nstart = vma->vm_start;
-		nr_pages = (nend - nstart) / PAGE_SIZE;
-		ret = __get_user_pages_locked(mm, nstart, nr_pages,
-					      NULL, NULL, &locked,
-					      FOLL_TOUCH | FOLL_WRITE);
-		if (ret <= 0)
+	mmap_read_lock(mm);
+	for (; nstart != end; nstart += PAGE_SIZE) {
+		if (fixup_user_fault(mm, nstart, fault_flags, NULL))
 			break;
-		nend = nstart + ret * PAGE_SIZE;
 	}
-	if (locked)
-		mmap_read_unlock(mm);
+	mmap_read_unlock(mm);
+
+	/* If we got all of our (truncated) fault-in, we claim we got it all */
 	if (nstart == end)
 		return 0;
+
+	/* .. otherwise we'll use the original untruncated size */
 	return size - min_t(size_t, nstart - start, size);
 }
 EXPORT_SYMBOL(fault_in_safe_writeable);
-- 
2.35.1.356.ge6630f57cf.dirty
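
For reference, here is a minimal caller-side sketch (not part of the patch) of the
pattern this helper exists for: a copy attempted with page faults disabled fails
with -EFAULT, so the destination is faulted in for real - which with this change
goes through fixup_user_fault()/handle_mm_fault() and therefore also marks the
PTEs accessed/dirty - and the copy is then retried. copy_under_lock() is a
hypothetical stand-in for the filesystem's locked copy path; only
fault_in_safe_writeable() and its return convention come from the patch above.

/*
 * Illustrative sketch only -- not from the patch.
 * copy_under_lock() is a hypothetical helper standing in for a copy
 * done with page faults disabled (e.g. while holding a filesystem lock).
 */
static ssize_t copy_with_faultin_retry(char __user *buf, size_t len)
{
	ssize_t ret;

	do {
		ret = copy_under_lock(buf, len);	/* may return -EFAULT */
		if (ret != -EFAULT)
			break;
		/*
		 * fault_in_safe_writeable() returns the number of bytes
		 * *not* faulted in; if nothing could be faulted in, stop
		 * retrying instead of looping forever.
		 */
		if (fault_in_safe_writeable(buf, len) == len)
			break;
	} while (1);

	return ret;
}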