On 27.02.24 18:10, Ryan Roberts wrote:
Hi David,
On 26/02/2024 17:41, Ryan Roberts wrote:
On 22/02/2024 10:20, David Hildenbrand wrote:
On 22.02.24 11:19, David Hildenbrand wrote:
On 25.10.23 16:45, Ryan Roberts wrote:
As preparation for supporting small-sized THP in the swap-out path,
without first needing to split to order-0, remove CLUSTER_FLAG_HUGE, which,
when present, always implies PMD-sized THP, the same size as the cluster.
The only use of the flag was to determine whether a swap entry refers to
a single page or a PMD-sized THP in swap_page_trans_huge_swapped().
Instead of relying on the flag, we now pass in nr_pages, which
originates from the folio's number of pages. This allows the logic to
work for folios of any order.
The one snag is that one of the swap_page_trans_huge_swapped() call
sites does not have the folio. But it was only being called there to
avoid bothering to call __try_to_reclaim_swap() in some cases.
__try_to_reclaim_swap() gets the folio and (via some other functions)
calls swap_page_trans_huge_swapped(). So I've removed the problematic
call site and believe the new logic should be equivalent.
That is the __try_to_reclaim_swap() -> folio_free_swap() ->
folio_swapped() -> swap_page_trans_huge_swapped() call chain I assume.
The "difference" is that you will now (1) get another temporary
reference on the folio and (2) (try)lock the folio every time you
discard a single PTE of a (possibly) large THP.
Thinking about it, your change will not only affect THP, but any call to
free_swap_and_cache().
Likely that's not what we want. :/
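Roughly, the per-PTE path we would now always take looks like the following (a
simplified sketch of the chain named above; the reclaim flags and error paths
in mainline swapfile.c are omitted and details may differ):

/*
 * Simplified sketch of what every free_swap_and_cache() now does per swap
 * PTE; not the literal mainline code.
 */
#include <linux/pagemap.h>
#include <linux/swap.h>

static void reclaim_one_entry_sketch(struct swap_info_struct *si,
				     unsigned long offset)
{
	swp_entry_t entry = swp_entry(si->type, offset);
	struct folio *folio;

	/* (1) temporary folio reference, taken for every zapped PTE */
	folio = filemap_get_folio(swap_address_space(entry), offset);
	if (IS_ERR(folio))
		return;

	/* (2) trylock of the (possibly large) folio, again per PTE */
	if (folio_trylock(folio)) {
		/*
		 * folio_free_swap() -> folio_swapped() ->
		 * swap_page_trans_huge_swapped(), now with nr_pages taken
		 * from folio_nr_pages() instead of CLUSTER_FLAG_HUGE.
		 */
		folio_free_swap(folio);
		folio_unlock(folio);
	}
	folio_put(folio);
}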
Is folio_trylock() really that expensive, given that the code path already
takes multiple spinlocks and we wouldn't expect the folio lock to be very
contended?
I guess filemap_get_folio() could be a bit more expensive, but again, is this
really a deal-breaker?
I'm just trying to refamiliarize myself with this series, but I think I ended up
allocating a cluster per cpu per order. So one potential solution would be to
turn the flag into a size and store it in the cluster info. (In fact I think I
was doing that in an early version of this series - will have to look at why I
got rid of that). Then we could avoid needing to figure out nr_pages from the folio.
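One possible shape for that (a hypothetical layout only, not an actual patch;
the field name "order" is made up here, and mainline currently packs the
cluster metadata as data:24 / flags:8):

/*
 * Hypothetical: repurpose a few bits of the cluster metadata to record the
 * order of the folio occupying the cluster, instead of a single
 * CLUSTER_FLAG_HUGE bit that can only mean "PMD order".
 */
#include <linux/spinlock.h>

struct swap_cluster_info {
	spinlock_t lock;	/* protects the fields below */
	unsigned int data:24;	/* free-slot count / next-cluster index */
	unsigned int flags:4;	/* CLUSTER_FLAG_FREE, CLUSTER_FLAG_NEXT_NULL */
	unsigned int order:4;	/* order of the folio backed by this cluster */
};

/* nr_pages for swap_page_trans_huge_swapped() without touching the folio */
static inline unsigned int cluster_nr_pages(struct swap_cluster_info *ci)
{
	return 1U << ci->order;
}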
I ran some microbenchmarks to see if these extra operations cause a performance
issue - it all looks OK to me.
Sorry, I'm drowning in reviews right now. I was hoping to get some of my own
stuff figured out today ... maybe tomorrow.
I modified your "pte-mapped-folio-benchmarks" to add a "munmap-swapped-forked"
mode, which prepares the 1G memory mapping by first paging it out with
MADV_PAGEOUT, then forks a child (and keeps it alive) so that the swap slots
have two references, and finally measures the duration of munmap() over the
entire range in the parent. The idea is that free_swap_and_cache() is called for
each PTE during munmap(). Prior to my change, swap_page_trans_huge_swapped()
will return true, due to the child's references, and __try_to_reclaim_swap() is
not called. After my change, we no longer have this short cut.
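In essence, the mode does roughly this (a minimal standalone sketch, not the
actual benchmark code; MADV_PAGEOUT needs Linux 5.4+ and a recent glibc):

/* Approximation of the "munmap-swapped-forked" mode described above. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/mman.h>

#define SIZE (1UL << 30)	/* 1G mapping, as in the description above */

int main(void)
{
	struct timespec t0, t1;
	pid_t child;
	char *mem;

	mem = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (mem == MAP_FAILED)
		return 1;

	memset(mem, 1, SIZE);			/* populate the mapping */
	madvise(mem, SIZE, MADV_PAGEOUT);	/* page the whole range out */

	child = fork();		/* child keeps a 2nd reference on each swap slot */
	if (child == 0) {
		pause();
		_exit(0);
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	munmap(mem, SIZE);	/* free_swap_and_cache() runs for each swap PTE */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("munmap: %f s\n", (t1.tv_sec - t0.tv_sec) +
				 (t1.tv_nsec - t0.tv_nsec) / 1e9);

	kill(child, SIGKILL);
	return 0;
}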
In both cases the results are within 1% (confirmed across multiple runs of 20
seconds each):
mm-stable: Average: 0.004997
+ change: Average: 0.005037
(These numbers are for Ampere Altra. I also tested on an M2 VM - no regression
there either.)
Do you still have a concern about this change?
The main concern I had was not about overhead due to atomic operations in the
non-concurrent case that you are measuring.
We might now unnecessarily be incrementing the folio refcount and taking
the folio lock. That will affect large folios in the swapcache now, IIUC.
Small folios should be unaffected.
The side effects of that can be:
* Code checking for additional folio references could now detect some and
back out (the "mapcount + swapcache*folio_nr_pages != folio_refcount"
stuff).
* Code that might really benefit from trylocking the folio might fail to
do so.
For example, splitting a large folio might now fail more often simply
because some process zaps a swap entry and takes the additional reference +
page lock that were previously optimized out.
How relevant is it? Relevant enough that someone decided to put that
optimization in? I don't know :)
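For reference, the split-side check in question looks roughly like this
(paraphrased from can_split_folio(), anon case only; not verbatim):

/*
 * Any reference beyond the mappings, the swapcache entries and the caller's
 * own ref makes the split back out - including a transient ref taken by
 * free_swap_and_cache() on another CPU.
 */
#include <linux/mm.h>

static bool can_split_anon_folio_sketch(struct folio *folio)
{
	/* swapcache holds one reference per subpage of the large folio */
	int extra_pins = folio_test_swapcache(folio) ?
			 folio_nr_pages(folio) : 0;

	/* -1 accounts for the reference held by the caller doing the split */
	return folio_mapcount(folio) ==
	       folio_ref_count(folio) - extra_pins - 1;
}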
Arguably, zapping a present PTE also leaves the refcount elevated for a while,
until the mapcount is dropped. But here, it could be avoided.
Digging a bit, it was introduced in:
commit e07098294adfd03d582af7626752255e3d170393
Author: Huang Ying <ying.huang@xxxxxxxxx>
Date: Wed Sep 6 16:22:16 2017 -0700
mm, THP, swap: support to reclaim swap space for THP swapped out
The normal swap slot reclaiming can be done when the swap count reaches
SWAP_HAS_CACHE. But for the swap slot which is backing a THP, all swap
slots backing one THP must be reclaimed together, because the swap slot
may be used again when the THP is swapped out again later. So the swap
slots backing one THP can be reclaimed together when the swap count for
all swap slots for the THP reached SWAP_HAS_CACHE. In the patch, the
functions to check whether the swap count for all swap slots backing one
THP reached SWAP_HAS_CACHE are implemented and used when checking
whether a swap slot can be reclaimed.
To make it easier to determine whether a swap slot is backing a THP, a
new swap cluster flag named CLUSTER_FLAG_HUGE is added to mark a swap
cluster which is backing a THP (Transparent Huge Page). Because THP
swap in as a whole isn't supported now. After deleting the THP from the
swap cache (for example, swapping out finished), the CLUSTER_FLAG_HUGE
flag will be cleared. So that, the normal pages inside THP can be
swapped in individually.
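For reference, the flag gave swap_page_trans_huge_swapped() a cheap way to tell
the two cases apart; roughly (simplified from the pre-change mm/swapfile.c, not
verbatim):

/*
 * swap_count(), cluster_is_huge() and lock_cluster_or_swap_info() are
 * swapfile.c-internal helpers.
 */
static bool swap_page_trans_huge_swapped(struct swap_info_struct *si,
					 swp_entry_t entry)
{
	unsigned long roffset = swp_offset(entry);
	unsigned long offset = round_down(roffset, SWAPFILE_CLUSTER);
	struct swap_cluster_info *ci;
	bool ret = false;
	int i;

	ci = lock_cluster_or_swap_info(si, offset);
	if (!ci || !cluster_is_huge(ci)) {
		/* order-0 entry: only this one swap_map slot matters */
		if (swap_count(si->swap_map[roffset]))
			ret = true;
		goto unlock;
	}
	/* PMD-sized THP: any still-swapped slot in the cluster counts */
	for (i = 0; i < SWAPFILE_CLUSTER; i++) {
		if (swap_count(si->swap_map[offset + i])) {
			ret = true;
			break;
		}
	}
unlock:
	unlock_cluster_or_swap_info(si, ci);
	return ret;
}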
With your change, if we have a swapped-out THP with 512 entries and exit(), we
would now grab a folio reference and trylock the folio 512 times in a row. In
the past, we would have done that at most once.
That doesn't feel quite right TBH ... so I'm wondering if there is any
low-hanging fruit to avoid that.
--
Cheers,
David / dhildenb