Re: [PATCH 1/3] mm: Move arch_do_swap_page() call to before swap_free()

On Tue, May 16, 2023 at 5:35 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 16.05.23 01:40, Peter Collingbourne wrote:
> > On Mon, May 15, 2023 at 06:34:30PM +0100, Catalin Marinas wrote:
> >> On Sat, May 13, 2023 at 05:29:53AM +0200, David Hildenbrand wrote:
> >>> On 13.05.23 01:57, Peter Collingbourne wrote:
> >>>> diff --git a/mm/memory.c b/mm/memory.c
> >>>> index 01a23ad48a04..83268d287ff1 100644
> >>>> --- a/mm/memory.c
> >>>> +++ b/mm/memory.c
> >>>> @@ -3914,19 +3914,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >>>>                    }
> >>>>            }
> >>>> -  /*
> >>>> -   * Remove the swap entry and conditionally try to free up the swapcache.
> >>>> -   * We're already holding a reference on the page but haven't mapped it
> >>>> -   * yet.
> >>>> -   */
> >>>> -  swap_free(entry);
> >>>> -  if (should_try_to_free_swap(folio, vma, vmf->flags))
> >>>> -          folio_free_swap(folio);
> >>>> -
> >>>> -  inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> >>>> -  dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> >>>>            pte = mk_pte(page, vma->vm_page_prot);
> >>>> -
> >>>>            /*
> >>>>             * Same logic as in do_wp_page(); however, optimize for pages that are
> >>>>             * certainly not shared either because we just allocated them without
> >>>> @@ -3946,8 +3934,21 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >>>>                    pte = pte_mksoft_dirty(pte);
> >>>>            if (pte_swp_uffd_wp(vmf->orig_pte))
> >>>>                    pte = pte_mkuffd_wp(pte);
> >>>> +  arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> >>>>            vmf->orig_pte = pte;
> >>>> +  /*
> >>>> +   * Remove the swap entry and conditionally try to free up the swapcache.
> >>>> +   * We're already holding a reference on the page but haven't mapped it
> >>>> +   * yet.
> >>>> +   */
> >>>> +  swap_free(entry);
> >>>> +  if (should_try_to_free_swap(folio, vma, vmf->flags))
> >>>> +          folio_free_swap(folio);
> >>>> +
> >>>> +  inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> >>>> +  dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> >>>> +
> >>>>            /* ksm created a completely new copy */
> >>>>            if (unlikely(folio != swapcache && swapcache)) {
> >>>>                    page_add_new_anon_rmap(page, vma, vmf->address);
> >>>> @@ -3959,7 +3960,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >>>>            VM_BUG_ON(!folio_test_anon(folio) ||
> >>>>                            (pte_write(pte) && !PageAnonExclusive(page)));
> >>>>            set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> >>>> -  arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> >>>>            folio_unlock(folio);
> >>>>            if (folio != swapcache && swapcache) {
> >>>
> >>>
> >>> You are moving the folio_free_swap() call after the folio_ref_count(folio)
> >>> == 1 check, which means that such (previously) swapped pages that are
> >>> exclusive cannot be detected as exclusive.
> >>>
> >>> There must be a better way to handle MTE here.
> >>>
> >>> Where are the tags stored, how is the location identified, and when are they
> >>> effectively restored right now?
> >>
> >> I haven't gone through Peter's patches yet but a pretty good description
> >> of the problem is here:
> >> https://lore.kernel.org/all/5050805753ac469e8d727c797c2218a9d780d434.camel@xxxxxxxxxxxx/.
> >> I couldn't reproduce it with my swap setup but both Qun-wei and Peter
> >> triggered it.
> >
> > In order to reproduce this bug it is necessary for the swap slot cache
> > to be disabled, which is unlikely to occur during normal operation. I
> > was only able to reproduce the bug by disabling it forcefully with the
> > following patch:
> >
> > diff --git a/mm/swap_slots.c b/mm/swap_slots.c
> > index 0bec1f705f8e0..25afba16980c7 100644
> > --- a/mm/swap_slots.c
> > +++ b/mm/swap_slots.c
> > @@ -79,7 +79,7 @@ void disable_swap_slots_cache_lock(void)
> >
> >   static void __reenable_swap_slots_cache(void)
> >   {
> > -     swap_slot_cache_enabled = has_usable_swap();
> > +     swap_slot_cache_enabled = false;
> >   }
> >
> >   void reenable_swap_slots_cache_unlock(void)
> >
> > With that I can trigger the bug on an MTE-utilizing process by running
> > a program that enumerates the process's private anonymous mappings and
> > calls process_madvise(MADV_PAGEOUT) on all of them.
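> >
> > A rough sketch of such a program (error handling mostly omitted; the
> > name pageout.c is made up, and it assumes kernel headers/glibc recent
> > enough to provide SYS_pidfd_open, SYS_process_madvise and MADV_PAGEOUT):
> >
> > /* pageout.c <pid>: MADV_PAGEOUT every private anonymous mapping of <pid> */
> > #define _GNU_SOURCE
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <sys/mman.h>
> > #include <sys/syscall.h>
> > #include <sys/uio.h>
> > #include <unistd.h>
> >
> > int main(int argc, char **argv)
> > {
> > 	pid_t pid = atoi(argv[1]);
> > 	int pidfd = syscall(SYS_pidfd_open, pid, 0);
> > 	char path[64], line[1024];
> > 	FILE *maps;
> >
> > 	snprintf(path, sizeof(path), "/proc/%d/maps", pid);
> > 	maps = fopen(path, "r");
> >
> > 	while (fgets(line, sizeof(line), maps)) {
> > 		unsigned long start, end;
> > 		char perms[8], file[512];
> > 		int n = sscanf(line, "%lx-%lx %7s %*s %*s %*s %511s",
> > 			       &start, &end, perms, file);
> >
> > 		/* private mapping? */
> > 		if (n < 3 || perms[3] != 'p')
> > 			continue;
> > 		/* file-backed? ([heap]/[stack] style names still count as anon) */
> > 		if (n == 4 && file[0] != '[')
> > 			continue;
> >
> > 		struct iovec iov = {
> > 			.iov_base = (void *)start,
> > 			.iov_len = end - start,
> > 		};
> > 		syscall(SYS_process_madvise, pidfd, &iov, 1, MADV_PAGEOUT, 0);
> > 	}
> > 	return 0;
> > }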
> >
> >> When a tagged page is swapped out, the arm64 code stores the metadata
> >> (tags) in a local xarray indexed by the swap pte. When restoring from
> >> swap, the arm64 set_pte_at() checks this xarray using the old swap pte
> >> and spills the tags onto the new page. Apparently something changed in
> >> the kernel recently that causes swap_range_free() to be called before
> >> set_pte_at(). The arm64 arch_swap_invalidate_page() frees the metadata
> >> from the xarray and the subsequent set_pte_at() won't find it.
> >>
> >> If we have the page, the metadata can be restored before set_pte_at()
> >> and I guess that's what Peter is trying to do (again, I haven't looked
> >> at the details yet; leaving it for tomorrow).
> >>
> >> Is there any other way of handling this? E.g. not releasing the metadata
> >> in arch_swap_invalidate_page() but only later in set_pte_at() once it has
> >> been restored. But then we may leak this metadata if there's no
> >> set_pte_at() (e.g. the process mapping the swap entry died).
> >
> > Another problem that I can see with this approach is that it does not
> > respect reference counts for swap entries, and it's unclear whether that
> > can be done in a non-racy fashion.
> >
> > Another approach that I considered was to move the hook to swap_readpage()
> > as in the patch below (sorry, it only applies to an older version
> > of Android's android14-6.1 branch and not mainline, but you get the
> > idea). But during a stress test (running the aforementioned program that
> > calls process_madvise(MADV_PAGEOUT) in a loop during an Android "monkey"
> > test) I discovered the following racy use-after-free that can occur when
> > two tasks T1 and T2 concurrently restore the same page:
> >
> > T1:                  | T2:
> > arch_swap_readpage() |
> >                      | arch_swap_readpage() -> mte_restore_tags() -> xa_load()
> > swap_free()          |
> >                      | arch_swap_readpage() -> mte_restore_tags() -> mte_restore_page_tags()
> >
> > We can avoid it by taking the swap_info_struct::lock spinlock in
> > mte_restore_tags(), but it seems like it would lead to lock contention.
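> >
> > Roughly like this (just a sketch, simplified from the current
> > mte_restore_tags(); the PG_mte_tagged bookkeeping and error handling
> > are left out):
> >
> > void mte_restore_tags(swp_entry_t entry, struct page *page)
> > {
> > 	struct swap_info_struct *si = get_swap_device(entry);
> > 	void *tags;
> >
> > 	if (!si)
> > 		return;
> >
> > 	/* serialize against swap_range_free() erasing the tags */
> > 	spin_lock(&si->lock);
> > 	tags = xa_load(&mte_pages, entry.val);
> > 	if (tags)
> > 		mte_restore_page_tags(page_address(page), tags);
> > 	spin_unlock(&si->lock);
> >
> > 	put_swap_device(si);
> > }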
> >
>
> Would the idea be to fail swap_readpage() on the one that comes last,
> simply retrying to look up the page?

The idea would be that T2's arch_swap_readpage() might not find the
tags if it ran after swap_free(), so T2 would produce a page without
restored tags. But that wouldn't matter, because T1 reaching
swap_free() means that T2 will follow the goto at [1] after waiting
for T1 to unlock at [2], and T2's page will be discarded.
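
For reference, the goto at [1] is the swapcache consistency check in
do_swap_page(), roughly (paraphrased, not quoted verbatim from that
commit):

	if (unlikely(!folio_test_swapcache(folio) ||
		     page_private(page) != entry.val))
		goto out_page;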

> This might be a naive question, but how does MTE play along with shared
> anonymous pages?

It should work fine. shmem_writepage() calls swap_writepage(), which
in turn calls arch_prepare_to_swap() to write out the tags, and
shmem_swapin_folio() calls arch_swap_restore() to restore them.
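
As a sketch of the arm64 wiring of those hooks (simplified from
arch/arm64/include/asm/pgtable.h; exact signatures have changed a bit
across kernel versions):

#define __HAVE_ARCH_PREPARE_TO_SWAP
static inline int arch_prepare_to_swap(struct page *page)
{
	/* save the tags of a tagged page before it is written to swap */
	if (system_supports_mte())
		return mte_save_tags(page);
	return 0;
}

#define __HAVE_ARCH_SWAP_RESTORE
static inline void arch_swap_restore(swp_entry_t entry, struct folio *folio)
{
	/* copy the saved tags back onto the newly read page */
	if (system_supports_mte())
		mte_restore_tags(entry, &folio->page);
}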

Peter

[1] https://github.com/torvalds/linux/blob/f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6/mm/memory.c#L3881
[2] https://github.com/torvalds/linux/blob/f1fcbaa18b28dec10281551dfe6ed3a3ed80e3d6/mm/memory.c#L4006



