On Tue, 30 May 2023 21:02:51 +0000 Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:

> Commit 71024cb4a0bf ("frontswap: remove frontswap_tmem_exclusive_gets")
> removed support for exclusive loads from frontswap as it was not used.
>
> Bring back exclusive loads support to frontswap by adding an
> exclusive_loads argument to frontswap_ops. Add support for exclusive
> loads to zswap behind CONFIG_ZSWAP_EXCLUSIVE_LOADS.

Why is this Kconfigurable? Why not just enable the feature for all builds?

> Refactor zswap entry invalidation in zswap_frontswap_invalidate_page()
> into zswap_invalidate_entry() to reuse it in zswap_frontswap_load().
>
> With exclusive loads, we avoid having two copies of the same page in
> memory (compressed & uncompressed) after faulting it in from zswap. On
> the other hand, if the page is to be reclaimed again without being
> dirtied, it will be re-compressed. Compression is not usually slow, and
> a page that was just faulted in is less likely to be reclaimed again
> soon.
>
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -46,6 +46,19 @@ config ZSWAP_DEFAULT_ON
>  	  The selection made here can be overridden by using the kernel
>  	  command line 'zswap.enabled=' option.
>  
> +config ZSWAP_EXCLUSIVE_LOADS
> +	bool "Invalidate zswap entries when pages are loaded"
> +	depends on ZSWAP
> +	help
> +	  If selected, when a page is loaded from zswap, the zswap entry is
> +	  invalidated at once, as opposed to leaving it in zswap until the
> +	  swap entry is freed.
> +
> +	  This avoids having two copies of the same page in memory
> +	  (compressed and uncompressed) after faulting in a page from zswap.
> +	  The cost is that if the page was never dirtied and needs to be
> +	  swapped out again, it will be re-compressed.

So it's a speed-vs-space tradeoff? I'm not sure how users are to decide
whether they want this. Did we help them as much as possible?