Re: [PATCH v2] mm/migrate: fix shmem xarray update during migration

On 4 Mar 2025, at 15:07, Zi Yan wrote:

> On 4 Mar 2025, at 12:18, Zi Yan wrote:
>
>> On 4 Mar 2025, at 4:47, Hugh Dickins wrote:
>>
>>> On Fri, 28 Feb 2025, Zi Yan wrote:
>>>
>>>> Pagecache uses multi-index entries for large folios, and so does shmem. Only
>>>> the swap cache still stores multiple entries for a single large folio.
>>>> Commit fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
>>>> fixed swap cache but got shmem wrong by storing multiple entries for
>>>> a large shmem folio. Fix it by storing a single entry for a shmem
>>>> folio.
>>>>
>>>> Fixes: fc346d0a70a1 ("mm: migrate high-order folios in swap cache correctly")
>>>> Reported-by: Liu Shixin <liushixin2@xxxxxxxxxx>
>>>> Closes: https://lore.kernel.org/all/28546fb4-5210-bf75-16d6-43e1f8646080@xxxxxxxxxx/
>>>> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
>>>> Reviewed-by: Shivank Garg <shivankg@xxxxxxx>
>>>
>>> It's a great find (I think), and your commit message is okay:
>>> but unless I'm much mistaken, NAK to the patch itself.
>>
>> Got it. Thank you for the review.
>>
>>>
>>> First, I say "(I think)" there, because I don't actually know what the
>>> loop writing the same folio nr times to the multi-index entry does to
>>> the xarray: I can imagine it as being completely harmless, just nr
>>> times more work than was needed.
>
> It seems that you are right on this one. I tried to reproduce the
> issue on mainline but could not, and I did see shmem hit the entries = nr
> path. So it is likely there is no bug in mainline, just inefficiency.
>
> This fix might just mask bugs introduced in my folio_split() patchset,
> since I reverted my patch that uses xas_try_split() in shmem_large_split_entry()
> and still hit the issue. Let me do more debugging and get back.

I need to take this back. It turns out I had not enabled large folios on
shmem when I was testing 6.14-rc5. After enabling 64KB-only large folios
on shmem, shmem swapin got stuck with the repro from Liu Shixin (running
compact_memory all the time, then doing linear shmem swapin). But if I
enable 2MB large folios on shmem, there is no issue.

I see no issue with v6.13 either, so this issue seems to have been introduced
during the 6.14-rc cycle. I am going to rebase my folio_split() patchset on v6.13
to test the uniform split part (the non-uniform part would need Baolin’s patchset).
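
As a side note for anyone following the thread, the distinction at the heart
of this is that the page cache (including shmem) stores one multi-index xarray
entry covering a whole large folio, while the swap cache stores nr separate
entries, one per index. Below is a minimal sketch of the two schemes using the
xarray API; it is an illustration only, not kernel code, and the helper names
are made up (error handling omitted):

#include <linux/xarray.h>

/* Page cache (including shmem): one multi-index entry covers all 1 << order indices. */
static void store_single_multi_index_entry(struct xarray *xa, unsigned long index,
                                           unsigned int order, void *folio)
{
        XA_STATE_ORDER(xas, xa, index, order);

        xas_lock(&xas);
        xas_store(&xas, folio);
        xas_unlock(&xas);
}

/* Swap cache: 1 << order separate order-0 entries, one per index. */
static void store_per_index_entries(struct xarray *xa, unsigned long index,
                                    unsigned int order, void *folio)
{
        XA_STATE(xas, xa, index);
        unsigned long i, nr = 1UL << order;

        xas_lock(&xas);
        for (i = 0; i < nr; i++) {
                xas_store(&xas, folio);
                xas_next(&xas);
        }
        xas_unlock(&xas);
}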

>
>>>
>>> But I guess it does something bad, since Matthew was horrified,
>>> and we have all found that your patch appears to improve behaviour
>>> (or at least improve behaviour in the context of your folio_split()
>>> series: none of us noticed a problem before that, but it may be
>>> that your new series is widening our exposure to existing bugs).
>>>
>>> Maybe your original patch, with the shmem_mapping(mapping) check there,
>>> was good, and it's only wrong when changed to !folio_test_anon(folio);
>>> but TBH I find it too confusing, with the conditionals the way they are.
>>> See my preferred alternative below.
>>>
>>> The vital point is that multi-index entries are not used in swap cache:
>>> whether the folio in question originates from anon or from shmem.  And
>>> it's easier to understand once you remember that a shmem folio is never
>>> in both page cache and swap cache at the same time (well, there may be an
>>> instant of transition from one to other while that folio is held locked) -
>>> once it's in swap cache, folio->mapping is NULL and it's no longer
>>> recognizable as from a shmem mapping.
>>
>> Got it. Now it all makes sense to me. Thank you for the explanation.
>>
>>>
>>> The way I read your patch originally, I thought it meant that shmem
>>> folios go into the swap cache as multi-index, but anon folios do not;
>>> which seemed a worrying mixture to me.  But crashes on the
>>> VM_BUG_ON_PAGE(entry != folio, entry) in __delete_from_swap_cache()
>>> yesterday (with your patch in) led me to see how add_to_swap_cache()
>>> inserts multiple non-multi-index entries, whether for anon or for shmem.
>>
>> Thanks for the pointer.
>>
>>>
>>> If this patch really is needed in old releases, then I suspect that
>>> mm/huge_memory.c needs correction there too; but let me explain in
>>> a response to your folio_split() series.
>>>
>>>> ---
>>>>  mm/migrate.c | 6 +++++-
>>>>  1 file changed, 5 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>> index 365c6daa8d1b..2c9669135a38 100644
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -524,7 +524,11 @@ static int __folio_migrate_mapping(struct address_space *mapping,
>>>>  			folio_set_swapcache(newfolio);
>>>>  			newfolio->private = folio_get_private(folio);
>>>>  		}
>>>> -		entries = nr;
>>>> +		/* shmem uses high-order entry */
>>>> +		if (!folio_test_anon(folio))
>>>> +			entries = 1;
>>>> +		else
>>>> +			entries = nr;
>>>>  	} else {
>>>>  		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
>>>>  		entries = 1;
>>>> -- 
>>>> 2.47.2
>>>
>>> NAK to that patch above, here's how I think it should be:
>>
>> OK. I will resend your fix, along with the __split_huge_page() fixes, against
>> Linus’s tree. My folio_split() patchset will conflict with the fix, but the merge
>> resolution should be simple, since the related patch just deletes __split_huge_page()
>> entirely.
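
To make sure I read the direction right: the deciding factor is whether the
folio is currently in the swap cache, not whether it is anon or shmem. Roughly
the shape I have in mind for __folio_migrate_mapping() (my own illustration of
that direction, not necessarily the exact diff Hugh posted):

	if (folio_test_swapbacked(folio)) {
		__folio_set_swapbacked(newfolio);
		if (folio_test_swapcache(folio)) {
			folio_set_swapcache(newfolio);
			newfolio->private = folio_get_private(folio);
			/* swap cache holds nr separate entries for this folio */
			entries = nr;
		} else {
			/* shmem page cache holds a single multi-index entry */
			entries = 1;
		}
	} else {
		VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
		entries = 1;
	}
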
>
> Best Regards,
> Yan, Zi


Best Regards,
Yan, Zi