On Fri, Sep 22, 2023 at 07:22:11PM +0200, Domenico Cerasuolo wrote:
> While stress-testing zswap a memory corruption was happening when writing
> back pages. __frontswap_store used to check for duplicate entries before
> attempting to store a page in zswap, this was because if the store fails
> the old entry isn't removed from the tree. This change removes duplicate
> entries in zswap_store before the actual attempt.
>
> Based on commit ce9ecca0238b ("Linux 6.6-rc2")
>
> Fixes: 42c06a0e8ebe ("mm: kill frontswap")
> Signed-off-by: Domenico Cerasuolo <cerasuolodomenico@xxxxxxxxx>

Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>

> @@ -1218,6 +1218,19 @@ bool zswap_store(struct folio *folio)
>  	if (!zswap_enabled || !tree)
>  		return false;
>
> +	/*
> +	 * If this is a duplicate, it must be removed before attempting to store
> +	 * it, otherwise, if the store fails the old page won't be removed from
> +	 * the tree, and it might be written back overriding the new data.
> +	 */
> +	spin_lock(&tree->lock);
> +	dupentry = zswap_rb_search(&tree->rbroot, offset);
> +	if (dupentry) {
> +		zswap_duplicate_entry++;
> +		zswap_invalidate_entry(tree, dupentry);
> +	}
> +	spin_unlock(&tree->lock);

Do we still need the dupe handling at the end of the function then?

The dupe store happens because a page that's already in swapcache has
changed and we're trying to swap_writepage() it again with new data.

But the page is locked at this point, pinning the swap entry. So even
after the tree lock is dropped I don't see how *another* store to the
tree at this offset could occur while we're compressing.
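
For reference, the end-of-function dupe handling I mean is the insert
retry loop, which has roughly this shape (paraphrased sketch from
memory, not a verbatim quote of current zswap.c; names follow the hunk
above):

	/* map */
	spin_lock(&tree->lock);
	do {
		ret = zswap_rb_insert(&tree->rbroot, entry, &dupentry);
		if (ret == -EEXIST) {
			/* drop the stale entry, then retry the insert */
			zswap_duplicate_entry++;
			zswap_invalidate_entry(tree, dupentry);
		}
	} while (ret == -EEXIST);
	spin_unlock(&tree->lock);

If the page lock really does exclude a second store for this offset
while we're compressing, that loop should never see -EEXIST once the
new check at the top has run, and it could presumably become a single
insert with the -EEXIST case treated as a bug. But maybe I'm missing a
path.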