On Fri, Jan 10, 2025 at 11:09 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
>
> Hi Usama,
>
> Please include me in the discussion. I'll try to attend, at least remotely.
>
> On Fri, Jan 10, 2025 at 9:06 AM Usama Arif <usamaarif642@xxxxxxxxx> wrote:
> >
> > I would like to propose a session to discuss the work going on
> > around large folio swapin, whether it's traditional swap,
> > zswap or zram.
> >
> > Large folios have obvious advantages that have been discussed before,
> > like fewer page faults, batched PTE and rmap manipulation, reduced
> > lru list overhead, and TLB coalescing (for arm64 and amd).
> > However, swapping in large folios has its own drawbacks, like higher
> > swap thrashing.
> > I had initially sent an RFC for zswapin of large folios in [1],
> > but it causes a regression in kernel build time due to swap
> > thrashing, which I am confident is happening with zram large
> > folio swapin as well (which is merged in the kernel).
> >
> > Some of the points we could discuss in the session:
> >
> > - What is the right (preferably open source) benchmark to test for
> > swapin of large folios? Kernel build time in a limited
> > memory cgroup shows a regression, microbenchmarks show a massive
> > improvement, and maybe there are benchmarks where TLB misses are
> > a big factor and which show an improvement.
>
> My understanding is that it largely depends on the workload. In interactive
> scenarios, such as on a phone, swap thrashing is not an issue because
> there is minimal to no thrashing for the app occupying the screen
> (foreground). In such cases, swap bandwidth becomes the most critical factor
> in improving app switching speed, especially when multiple applications
> are switching between background and foreground states.
>
> >
> > - We could have something like
> > /sys/kernel/mm/transparent_hugepage/hugepages-*kB/swapin_enabled
> > to enable/disable swapin, but such knobs are going to be difficult to
> > tune: they might have different optimum values based on workloads and
> > are likely to be left at their default values. Is there some dynamic
> > way to decide when to swap in large folios and when to fall back to
> > smaller folios? The swapin_readahead swapcache path, which only
> > supports 4K folios at the moment, has a readahead window based on
> > hits; however, readahead is a folio flag and not a page flag, so this
> > method can't be used: once a large folio is swapped in, we won't get
> > a fault, and subsequent hits on other pages of the large folio won't
> > be recorded.
> >
> > - For zswap and zram, it might be that doing larger block compression/
> > decompression might offset the regression from swap thrashing, but it
> > brings about its own issues. For example, once a large folio is swapped
> > out, it could fail to swap in as a large folio and fall back
> > to 4K, resulting in redundant decompressions.
>
> That's correct. My current workaround involves swapping four small folios,
> and zsmalloc will compress and decompress in chunks of four pages,
> regardless of the actual size of the mTHP. The improvement in compression
> ratio and speed becomes less significant beyond four pages, even
> though there is still some increase.
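(Purely as an illustrative aside, the arithmetic behind the fixed 4-page
chunking described above is sketched below. This is not zsmalloc code or
any kernel API; the names, the 4 KiB page size and the 4-page unit are
assumptions for the example. It only shows why the compression ratio/speed
gain flattens once a folio exceeds four pages.)

/* illustrative only: maps an mTHP order onto fixed 4-page compression units */
#include <stdio.h>

#define EXAMPLE_PAGE_SIZE   4096UL /* assumed 4 KiB base pages */
#define EXAMPLE_CHUNK_PAGES 4UL    /* assumed fixed compression unit */

int main(void)
{
        unsigned int order;

        for (order = 0; order <= 4; order++) {
                unsigned long pages = 1UL << order;
                unsigned long chunks =
                        (pages + EXAMPLE_CHUNK_PAGES - 1) / EXAMPLE_CHUNK_PAGES;

                /* order-2 (16 kB) still fits one unit; order-4 (64 kB) needs four */
                printf("order-%u folio (%3lu kB) -> %lu compression unit(s)\n",
                       order, pages * EXAMPLE_PAGE_SIZE / 1024, chunks);
        }
        return 0;
}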
>
> Our recent experiments on phones also show that enabling direct reclamation
> for do_swap_page() to allocate order-2 mTHP results in a 0% allocation
> failure rate; this probably removes the need for falling back to 4 small
> folios. (Note that our experiments include Yu's TAO; Android GKI has
> already merged it. However, since 2 is less than
> PAGE_ALLOC_COSTLY_ORDER, we might achieve similar results even
> without Yu's TAO, although I have not confirmed this.)
>
> > This will also mean that swapin of large folios from traditional swap
> > isn't something we should proceed with?
> >
> > - Should we even support large folio swapin? You often have high swap
> > activity when the system/cgroup is close to running out of memory; at
> > that point, maybe the best way forward is to just swap in 4K pages and
> > let khugepaged [2], [3] collapse them if the surrounding pages are
> > swapped in as well.
>
> This approach might be suitable for non-interactive scenarios, such as building
> a kernel within a memory control group (memcg) or running other server
> applications. However, performing collapse in interactive and power-sensitive
> scenarios would be unnecessary and could lead to wasted power due to
> memory migration and unmap/map operations.
>
> It is quite challenging, though, to automatically determine the type of
> workload the system is running. I feel we still need a global control to
> decide whether to enable mTHP swap-in: not necessarily per size, but at
> least at a global level. That said, there is evident resistance to
> introducing additional controls to enable or disable mTHP features.

I drafted an approach that eliminates the need for this control. Based
on my testing, it results in even less swap thrashing than disabling
mTHP swap-in for the non-mglru case. Here are the results:

real    6m27.227s
user    49m46.751s
sys     3m34.512s
pswpin: 294050
pswpout: 1265556
64kB-swpout: 0
32kB-swpout: 0
16kB-swpout: 288163
64kB-swpin: 0
32kB-swpin: 0
16kB-swpin: 22899
pgpgin: 11816316
pgpgout: 13891256
swpout_zero: 136907
swpin_zero: 77215

The draft is as below:

[PATCH RFC] mm: throttle large folios swap-in based on thrashing

We have two types of workloads. The first is interactive systems, where
the foreground desktop apps typically do not swap out. In this case, we
are more concerned with swap bandwidth for switching background and
foreground apps, which is primarily driven by large folio swap-ins. The
second type involves scenarios like building a kernel in a 1GB memory
cgroup, where extensive swapping occurs. Large folio swap-ins can
exacerbate swap thrashing in such cases.

While we could conceptually use a sysfs control to toggle the mTHP
swap-in feature, there is resistance to adding new controls. Instead,
we employ a simple automatic mechanism to roughly detect swap thrashing:
if refaults are observed in a recent batch of swap-ins, we fall back to
small folio swap-ins.

Even during a kernel build in a 1GiB memory cgroup, we continue to
observe many large folio swap-ins, benefiting from increased swap-in
bandwidth, while the increased swap thrashing is eliminated compared
to disabling mTHP swap-in.

Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
---
 include/linux/mmzone.h |  9 +++++++++
 mm/memcontrol.c        | 19 +++++++++++++++++--
 mm/workingset.c        | 37 +++++++++++++++++++++++++++++++++++--
 3 files changed, 61 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9540b41894da..c6deece243d1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -329,6 +329,15 @@ enum lruvec_flags {
         LRUVEC_NODE_CONGESTED,
 };
 
+/*
+ * Has the lruvec experienced an anon large folio refault recently?
+ * Once a refault occurs, we set it to 31; it only degrades to 0 if
+ * there are more than 31 consecutive non-refault swap-ins.
+ */
+#define LRUVEC_REFAULT_WIDTH    5
+#define LRUVEC_REFAULT_OFFS     (LRUVEC_NODE_CONGESTED + 1)
+#define LRUVEC_REFAULT_MASK     ((BIT(LRUVEC_REFAULT_WIDTH) - 1) << LRUVEC_REFAULT_OFFS)
+
 #endif /* !__GENERATING_BOUNDS_H */
 
 /*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 46f8b372d212..4155c4126a80 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4556,12 +4556,21 @@ int mem_cgroup_charge_hugetlb(struct folio *folio, gfp_t gfp)
 int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
                                    gfp_t gfp, swp_entry_t entry)
 {
+        struct pglist_data *pgdat = folio_pgdat(folio);
+        struct lruvec *lruvec;
         struct mem_cgroup *memcg;
         unsigned short id;
         int ret;
 
-        if (mem_cgroup_disabled())
-                return 0;
+        if (mem_cgroup_disabled()) {
+                /*
+                 * lruvec is congested or has recent THP refaults,
+                 * avoid future swap thrashing
+                 */
+                lruvec = &pgdat->__lruvec;
+                return (folio_test_large(folio) && lruvec->flags) ?
+                        -ENOMEM : 0;
+        }
 
         id = lookup_swap_cgroup_id(entry);
         rcu_read_lock();
@@ -4570,8 +4579,14 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
         memcg = get_mem_cgroup_from_mm(mm);
         rcu_read_unlock();
 
+        lruvec = mem_cgroup_lruvec(memcg, folio_pgdat(folio));
+        if (folio_test_large(folio) && lruvec->flags) {
+                ret = -ENOMEM;
+                goto out;
+        }
         ret = charge_memcg(folio, memcg, gfp);
 
+out:
         css_put(&memcg->css);
         return ret;
 }
diff --git a/mm/workingset.c b/mm/workingset.c
index 4841ae8af411..095f8668dc22 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -280,6 +280,28 @@ static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
         return abs_diff(max_seq, *token >> LRU_REFS_WIDTH) < MAX_NR_GENS;
 }
 
+static void lruvec_set_max_refaults(struct lruvec *lruvec)
+{
+        set_mask_bits(&lruvec->flags, LRUVEC_REFAULT_MASK, LRUVEC_REFAULT_MASK);
+}
+
+static int lruvec_dec_refaults(struct lruvec *lruvec)
+{
+        unsigned long new_flags, old_flags = READ_ONCE(lruvec->flags);
+        unsigned long new_ref, old_ref;
+
+        do {
+                old_ref = (old_flags & LRUVEC_REFAULT_MASK) >> LRUVEC_REFAULT_OFFS;
+                if (old_ref == 0)
+                        return 0;
+                new_ref = old_ref - 1;
+                new_flags = old_flags & ~LRUVEC_REFAULT_MASK;
+                new_flags |= new_ref << LRUVEC_REFAULT_OFFS;
+        } while (!try_cmpxchg(&lruvec->flags, &old_flags, new_flags));
+
+        return old_ref;
+}
+
 static void lru_gen_refault(struct folio *folio, void *shadow)
 {
         bool recent;
@@ -299,8 +321,14 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 
         mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + type, delta);
 
-        if (!recent)
+        if (!recent) {
+                if (!type)
+                        lruvec_dec_refaults(lruvec);
                 goto unlock;
+        }
+
+        if (!type && folio_test_large(folio))
+                lruvec_set_max_refaults(lruvec);
 
         lrugen = &lruvec->lrugen;
 
@@ -563,11 +591,16 @@ void workingset_refault(struct folio *folio, void *shadow)
 
         mod_lruvec_state(lruvec, WORKINGSET_REFAULT_BASE + file, nr);
 
-        if (!workingset_test_recent(shadow, file, &workingset, true))
+        if (!workingset_test_recent(shadow, file, &workingset, true)) {
+                if (!file)
+                        lruvec_dec_refaults(lruvec);
                 return;
+        }
 
         folio_set_active(folio);
         workingset_age_nonresident(lruvec, nr);
+        if (!file && folio_test_large(folio))
+                lruvec_set_max_refaults(lruvec);
         mod_lruvec_state(lruvec, WORKINGSET_ACTIVATE_BASE + file, nr);
 
         /* Folio was active prior to eviction */
--
2.34.1
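To make the intended behaviour of the counter easier to see, here is a small
self-contained userspace sketch of the same bit arithmetic. It is illustrative
only: the bit offset is an assumption, the names are made up, and plain C
replaces the kernel's set_mask_bits()/try_cmpxchg() atomics; the authoritative
logic is the patch above.

/* illustrative sketch of the saturating refault counter packed into lruvec->flags */
#include <stdio.h>

#define BIT(n)            (1UL << (n))
#define EX_REFAULT_OFFS   2 /* assumed: just above the existing congestion flag bits */
#define EX_REFAULT_WIDTH  5
#define EX_REFAULT_MASK   ((BIT(EX_REFAULT_WIDTH) - 1) << EX_REFAULT_OFFS)

static unsigned long flags; /* stand-in for lruvec->flags */

/* a recent anon large-folio refault saturates the counter at 31 */
static void set_max_refaults(void)
{
        flags |= EX_REFAULT_MASK;
}

/* each non-refault anon swap-in decays the counter by one, stopping at 0 */
static void dec_refaults(void)
{
        unsigned long ref = (flags & EX_REFAULT_MASK) >> EX_REFAULT_OFFS;

        if (ref) {
                flags &= ~EX_REFAULT_MASK;
                flags |= (ref - 1) << EX_REFAULT_OFFS;
        }
}

int main(void)
{
        int i;

        set_max_refaults(); /* one thrashing signal observed */
        for (i = 0; i < 33; i++) {
                /* while the counter is non-zero, mem_cgroup_swapin_charge_folio()
                 * above would return -ENOMEM for a large folio, forcing 4K fallback */
                printf("clean swap-in %2d: counter=%2lu, large folio swap-in %s\n",
                       i, (flags & EX_REFAULT_MASK) >> EX_REFAULT_OFFS,
                       flags ? "throttled" : "allowed");
                dec_refaults();
        }
        return 0;
}

The ceiling of 31 simply follows from the 5-bit field; what matters for the
heuristic is the decay horizon, i.e. how many consecutive clean swap-ins must
pass before large folio swap-in is allowed again.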
>
> By the way, Usama, have you ever tried switching between mglru and the
> traditional active/inactive LRU? My experience shows a significant
> difference in swap thrashing: the active/inactive LRU exhibits much less
> swap thrashing in my local kernel build tests.
>
> These numbers are from the latest mm-unstable.
>
> *********** default mglru: ***********
>
> root@barry-desktop:/home/barry/develop/linux# ./build.sh
> *** Executing round 1 ***
> real    6m44.561s
> user    46m53.274s
> sys     3m48.585s
> pswpin: 1286081
> pswpout: 3147936
> 64kB-swpout: 0
> 32kB-swpout: 0
> 16kB-swpout: 714580
> 64kB-swpin: 0
> 32kB-swpin: 0
> 16kB-swpin: 286881
> pgpgin: 17199072
> pgpgout: 21493892
> swpout_zero: 229163
> swpin_zero: 84353
>
> ******** disable mglru ********
>
> root@barry-desktop:/home/barry/develop/linux# echo 0 > /sys/kernel/mm/lru_gen/enabled
>
> root@barry-desktop:/home/barry/develop/linux# ./build.sh
> *** Executing round 1 ***
> real    6m27.944s
> user    46m41.832s
> sys     3m30.635s
> pswpin: 474036
> pswpout: 1434853
> 64kB-swpout: 0
> 32kB-swpout: 0
> 16kB-swpout: 331755
> 64kB-swpin: 0
> 32kB-swpin: 0
> 16kB-swpin: 106333
> pgpgin: 11763720
> pgpgout: 14551524
> swpout_zero: 145050
> swpin_zero: 87981
>
> my build script:
>
> #!/bin/bash
> echo never > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
> echo never > /sys/kernel/mm/transparent_hugepage/hugepages-32kB/enabled
> echo always > /sys/kernel/mm/transparent_hugepage/hugepages-16kB/enabled
> echo never > /sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
>
> vmstat_path="/proc/vmstat"
> thp_base_path="/sys/kernel/mm/transparent_hugepage"
>
> read_values() {
>     pswpin=$(grep "pswpin" $vmstat_path | awk '{print $2}')
>     pswpout=$(grep "pswpout" $vmstat_path | awk '{print $2}')
>     pgpgin=$(grep "pgpgin" $vmstat_path | awk '{print $2}')
>     pgpgout=$(grep "pgpgout" $vmstat_path | awk '{print $2}')
>     swpout_zero=$(grep "swpout_zero" $vmstat_path | awk '{print $2}')
>     swpin_zero=$(grep "swpin_zero" $vmstat_path | awk '{print $2}')
>     swpout_64k=$(cat $thp_base_path/hugepages-64kB/stats/swpout 2>/dev/null || echo 0)
>     swpout_32k=$(cat $thp_base_path/hugepages-32kB/stats/swpout 2>/dev/null || echo 0)
>     swpout_16k=$(cat $thp_base_path/hugepages-16kB/stats/swpout 2>/dev/null || echo 0)
>     swpin_64k=$(cat $thp_base_path/hugepages-64kB/stats/swpin 2>/dev/null || echo 0)
>     swpin_32k=$(cat $thp_base_path/hugepages-32kB/stats/swpin 2>/dev/null || echo 0)
>     swpin_16k=$(cat $thp_base_path/hugepages-16kB/stats/swpin 2>/dev/null || echo 0)
>     echo "$pswpin $pswpout $swpout_64k $swpout_32k $swpout_16k $swpin_64k $swpin_32k $swpin_16k $pgpgin $pgpgout $swpout_zero $swpin_zero"
> }
>
> for ((i=1; i<=1; i++))
> do
>     echo
>     echo "*** Executing round $i ***"
>     make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- clean 1>/dev/null 2>/dev/null
>     echo 3 > /proc/sys/vm/drop_caches
>
>     # kernel build
>     initial_values=($(read_values))
>     time systemd-run --scope -p MemoryMax=1G make ARCH=arm64 \
>         CROSS_COMPILE=aarch64-linux-gnu- vmlinux -j10 1>/dev/null 2>/dev/null
>     final_values=($(read_values))
>
>     echo "pswpin: $((final_values[0] - initial_values[0]))"
>     echo "pswpout: $((final_values[1] - initial_values[1]))"
>     echo "64kB-swpout: $((final_values[2] - initial_values[2]))"
>     echo "32kB-swpout: $((final_values[3] - initial_values[3]))"
>     echo "16kB-swpout: $((final_values[4] - initial_values[4]))"
>     echo "64kB-swpin: $((final_values[5] - initial_values[5]))"
>     echo "32kB-swpin: $((final_values[6] - initial_values[6]))"
>     echo "16kB-swpin: $((final_values[7] - initial_values[7]))"
>     echo "pgpgin: $((final_values[8] - initial_values[8]))"
>     echo "pgpgout: $((final_values[9] - initial_values[9]))"
>     echo "swpout_zero: $((final_values[10] - initial_values[10]))"
>     echo "swpin_zero: $((final_values[11] - initial_values[11]))"
>     sync
>     sleep 10
> done
>
>
> > [1] https://lore.kernel.org/all/20241018105026.2521366-1-usamaarif642@xxxxxxxxx/
> > [2] https://lore.kernel.org/all/20250108233128.14484-1-npache@xxxxxxxxxx/
> > [3] https://lore.kernel.org/lkml/20241216165105.56185-1-dev.jain@xxxxxxx/
> >
> > Thanks,
> > Usama
>
> Thanks
> Barry