Re: [PATCH v3 10/18] mm: Allow non-hugetlb large folios to be batch processed

On Wed, Mar 06, 2024 at 01:41:13PM -0500, Zi Yan wrote:
> I had a chat with willy on the deferred list mis-handling. Current migration
> code (starting from commit 616b8371539a6 ("mm: thp: enable thp migration in
> generic path")) does not properly handle THP and mTHP on the deferred list.
> So if the source folio is on the deferred list, after migration,
> the destination folio will not be. But this seems a benign bug, since
> the opportunity to split a partially mapped THP/mTHP is gone.
> 
> In terms of potential races, the source folio's refcount is elevated before
> migration, so deferred_split_scan() can move the folio off the deferred_list
> but cannot split it. During folio_migrate_mapping(), when the folio is frozen,
> deferred_split_scan() cannot move the folio off the deferred_list to begin
> with.
> 
> I am going to send a patch to fix the deferred_list handling in migration,
> but it seems not to be related to the bug in this email thread.

... IOW the source folio remains on the deferred list until its
refcount goes to 0, at which point we call folio_undo_large_rmappable()
and remove it from the deferred list.
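
(For reference, that removal path looks roughly like this; a paraphrase
of folio_undo_large_rmappable() from memory, so the exact shape may
differ from whatever tree you're looking at:)

	/* rough paraphrase, not the exact upstream code */
	void folio_undo_large_rmappable(struct folio *folio)
	{
		struct deferred_split *ds_queue;
		unsigned long flags;

		ds_queue = get_deferred_split_queue(folio);
		spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
		if (!list_empty(&folio->_deferred_list)) {
			/* still queued for deferred split; take it off */
			ds_queue->split_queue_len--;
			list_del_init(&folio->_deferred_list);
		}
		spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
	}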

A different line of enquiry might be the "else /* We lost race with
folio_put() */" in deferred_split_scan().  If somebody froze the
refcount, we can lose track of a deferred-split folio.  But I think
that's OK too.  The only places which freeze a folio are vmscan (about
to free), folio_migrate_mapping() (discussed above), and page splitting.
In none of these cases do we want to keep the folio on the deferred
split list because we're either freeing it, migrating it or splitting
it.
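
(The loop in question, roughly -- paraphrased from mm/huge_memory.c,
not the exact code:)

	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
							_deferred_list) {
		if (folio_try_get(folio)) {
			/* we hold a reference; move to our local list */
			list_move(&folio->_deferred_list, &list);
		} else {
			/* We lost race with folio_put() */
			list_del_init(&folio->_deferred_list);
			ds_queue->split_queue_len--;
		}
		if (!--sc->nr_to_scan)
			break;
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);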

Oh, and there's something in s390 that I can't be bothered to look at.


Hang on, I think I see it.  It is a race between folio freeing and
deferred_split_scan(), but page migration is absolved.  Look:

CPU 1: deferred_split_scan():
  spin_lock_irqsave(split_queue_lock)
  list_for_each_entry_safe()
    folio_try_get()
    list_move(&folio->_deferred_list, &list);
  spin_unlock_irqrestore(split_queue_lock)
  list_for_each_entry_safe() {
    folio_trylock() <- fails
    folio_put(folio);

CPU 2: folio_put():
  folio_undo_large_rmappable()
    ds_queue = get_deferred_split_queue(folio);
    spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
      list_del_init(&folio->_deferred_list);
*** at this point CPU 1 is not holding the split_queue_lock; the
folio is on CPU 1's local list, which that list_del_init() has just
corrupted ***

Now anything can happen.  It's a pretty tight race that involves at
least two CPUs (CPU 2 might have been the one holding the folio lock
at the time CPU 1 called folio_trylock()).  But I definitely widened
the window by moving the decrement of the refcount and the removal from
the deferred list further apart.


OK, so what's the solution here?  Personally I favour using a
folio_batch in deferred_split_scan() to hold the folios that we're
going to try to remove instead of a linked list.  Other ideas that are
perhaps less intrusive?
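
Something like this, as a very rough and untested sketch (the
split_queue_len accounting, memcg details and requeueing of folios we
fail to split are all glossed over, and a folio_batch only holds a
limited number of folios per pass):

	struct folio_batch batch;
	struct folio *folio, *next;
	int i;

	folio_batch_init(&batch);
	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
							_deferred_list) {
		if (folio_try_get(folio)) {
			/*
			 * Take the folio off the deferred list while we
			 * still hold the lock, so a racing folio_put()
			 * sees an empty _deferred_list and leaves our
			 * batch alone.
			 */
			list_del_init(&folio->_deferred_list);
			ds_queue->split_queue_len--;
			if (!folio_batch_add(&batch, folio))
				break;
		} else {
			/* We lost race with folio_put() */
			list_del_init(&folio->_deferred_list);
			ds_queue->split_queue_len--;
		}
		if (!--sc->nr_to_scan)
			break;
	}
	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

	for (i = 0; i < folio_batch_count(&batch); i++) {
		folio = batch.folios[i];
		if (folio_trylock(folio)) {
			if (!split_folio(folio))
				split++;
			folio_unlock(folio);
		}
		folio_put(folio);
	}

The point being that the folio comes fully off the deferred list, under
the lock, before we ever drop that lock, so a concurrent
folio_undo_large_rmappable() finds _deferred_list empty and never
touches our local batch.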



