The patch titled
     Subject: mm: zswap: support exclusive loads
has been added to the -mm mm-unstable branch.  Its filename is
     mm-zswap-support-exclusive-loads.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-zswap-support-exclusive-loads.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Subject: mm: zswap: support exclusive loads
Date: Tue, 30 May 2023 21:02:51 +0000

Commit 71024cb4a0bf ("frontswap: remove frontswap_tmem_exclusive_gets")
removed support for exclusive loads from frontswap as it was not used.
Bring back exclusive loads support to frontswap by adding an
exclusive_loads argument to frontswap_ops.

Add support for exclusive loads to zswap behind
CONFIG_ZSWAP_EXCLUSIVE_LOADS.

Refactor zswap entry invalidation in zswap_frontswap_invalidate_page()
into zswap_invalidate_entry() to reuse it in zswap_frontswap_load().

With exclusive loads, we avoid having two copies of the same page in
memory (compressed & uncompressed) after faulting it in from zswap.  On
the other hand, if the page is to be reclaimed again without being
dirtied, it will be re-compressed.  Compression is not usually slow, and
a page that was just faulted in is less likely to be reclaimed again
soon.
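To make the mechanics concrete, here is a simplified sketch of the load
path as modified by this patch (it mirrors the mm/frontswap.c hunk
below, with locking and the surrounding function body elided):

	ret = frontswap_ops->load(type, offset, page);
	if (ret == 0) {
		inc_frontswap_loads();
		if (frontswap_ops->exclusive_loads) {
			/*
			 * The backend copy is dropped below, so the page
			 * must be marked dirty: if it is reclaimed again,
			 * it has to be written out (re-compressed) anew
			 * rather than assumed to still exist in the
			 * backend.
			 */
			SetPageDirty(page);
			__frontswap_clear(sis, offset);
		}
	}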
Link: https://lkml.kernel.org/r/20230530210251.493194-1-yosryahmed@xxxxxxxxxx
Signed-off-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Suggested-by: Yu Zhao <yuzhao@xxxxxxxxxx>
Cc: Dan Streetman <ddstreet@xxxxxxxx>
Cc: Domenico Cerasuolo <cerasuolodomenico@xxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Cc: Nhat Pham <nphamcs@xxxxxxxxx>
Cc: Seth Jennings <sjenning@xxxxxxxxxx>
Cc: Vitaly Wool <vitaly.wool@xxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/frontswap.h |    1 +
 mm/Kconfig                |   13 +++++++++++++
 mm/frontswap.c            |    7 ++++++-
 mm/zswap.c                |   23 +++++++++++++++--------
 4 files changed, 35 insertions(+), 9 deletions(-)

--- a/include/linux/frontswap.h~mm-zswap-support-exclusive-loads
+++ a/include/linux/frontswap.h
@@ -13,6 +13,7 @@ struct frontswap_ops {
 	int (*load)(unsigned, pgoff_t, struct page *); /* load a page */
 	void (*invalidate_page)(unsigned, pgoff_t); /* page no longer needed */
 	void (*invalidate_area)(unsigned); /* swap type just swapoff'ed */
+	bool exclusive_loads; /* pages are invalidated after being loaded */
 };
 
 int frontswap_register_ops(const struct frontswap_ops *ops);
--- a/mm/frontswap.c~mm-zswap-support-exclusive-loads
+++ a/mm/frontswap.c
@@ -216,8 +216,13 @@ int __frontswap_load(struct page *page)
 
 	/* Try loading from each implementation, until one succeeds. */
 	ret = frontswap_ops->load(type, offset, page);
-	if (ret == 0)
+	if (ret == 0) {
 		inc_frontswap_loads();
+		if (frontswap_ops->exclusive_loads) {
+			SetPageDirty(page);
+			__frontswap_clear(sis, offset);
+		}
+	}
 	return ret;
 }
--- a/mm/Kconfig~mm-zswap-support-exclusive-loads
+++ a/mm/Kconfig
@@ -46,6 +46,19 @@ config ZSWAP_DEFAULT_ON
 	  The selection made here can be overridden by using the kernel
 	  command line 'zswap.enabled=' option.
 
+config ZSWAP_EXCLUSIVE_LOADS
+	bool "Invalidate zswap entries when pages are loaded"
+	depends on ZSWAP
+	help
+	  If selected, when a page is loaded from zswap, the zswap entry is
+	  invalidated at once, as opposed to leaving it in zswap until the
+	  swap entry is freed.
+
+	  This avoids having two copies of the same page in memory
+	  (compressed and uncompressed) after faulting in a page from zswap.
+	  The cost is that if the page was never dirtied and needs to be
+	  swapped out again, it will be re-compressed.
+
 choice
 	prompt "Default compressor"
 	depends on ZSWAP
--- a/mm/zswap.c~mm-zswap-support-exclusive-loads
+++ a/mm/zswap.c
@@ -1329,6 +1329,16 @@ shrink:
 	goto reject;
 }
 
+static void zswap_invalidate_entry(struct zswap_tree *tree,
+				   struct zswap_entry *entry)
+{
+	/* remove from rbtree */
+	zswap_rb_erase(&tree->rbroot, entry);
+
+	/* drop the initial reference from entry creation */
+	zswap_entry_put(tree, entry);
+}
+
 /*
  * returns 0 if the page was successfully decompressed
  * return -1 on entry not found or error
@@ -1403,6 +1413,8 @@ stats:
 		count_objcg_event(entry->objcg, ZSWPIN);
 freeentry:
 	spin_lock(&tree->lock);
+	if (!ret && IS_ENABLED(CONFIG_ZSWAP_EXCLUSIVE_LOADS))
+		zswap_invalidate_entry(tree, entry);
 	zswap_entry_put(tree, entry);
 	spin_unlock(&tree->lock);
 
@@ -1423,13 +1435,7 @@ static void zswap_frontswap_invalidate_p
 		spin_unlock(&tree->lock);
 		return;
 	}
-
-	/* remove from rbtree */
-	zswap_rb_erase(&tree->rbroot, entry);
-
-	/* drop the initial reference from entry creation */
-	zswap_entry_put(tree, entry);
-
+	zswap_invalidate_entry(tree, entry);
 	spin_unlock(&tree->lock);
 }
 
@@ -1472,7 +1478,8 @@ static const struct frontswap_ops zswap_
 	.load = zswap_frontswap_load,
 	.invalidate_page = zswap_frontswap_invalidate_page,
 	.invalidate_area = zswap_frontswap_invalidate_area,
-	.init = zswap_frontswap_init
+	.init = zswap_frontswap_init,
+	.exclusive_loads = IS_ENABLED(CONFIG_ZSWAP_EXCLUSIVE_LOADS),
 };
 
 /*********************************
_

Patches currently in -mm which might be from yosryahmed@xxxxxxxxxx are

memcg-use-seq_buf_do_printk-with-mem_cgroup_print_oom_meminfo.patch
memcg-dump-memorystat-during-cgroup-oom-for-v1.patch
writeback-move-wb_over_bg_thresh-call-outside-lock-section.patch
memcg-flush-stats-non-atomically-in-mem_cgroup_wb_stats.patch
memcg-calculate-root-usage-from-global-state.patch
memcg-remove-mem_cgroup_flush_stats_atomic.patch
cgroup-remove-cgroup_rstat_flush_atomic.patch
mm-zswap-support-exclusive-loads.patch
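For anyone who wants to try this out, a minimal sketch of the .config
fragment involved (option names taken from the patch above; zswap's
other build dependencies are assumed to be already satisfied):

   CONFIG_ZSWAP=y
   CONFIG_ZSWAP_EXCLUSIVE_LOADS=y

As the mm/Kconfig hunk notes, zswap itself can still be toggled at
runtime via the 'zswap.enabled=' kernel command line option; in this
version of the patch, exclusive loads are a build-time choice, tested
via IS_ENABLED() at the load and registration sites.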