On 2024/1/30 09:34, Johannes Weiner wrote:
> In the per-memcg LRU universe, LRU removal uses entry->objcg to
> determine which list count needs to be decreased. Drop the objcg
> reference after updating the LRU, to fix a possible use-after-free.
>
> Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>

LGTM, thanks!

Reviewed-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>

> ---
>  mm/zswap.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index de68a5928527..7f88b3a77e4a 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -522,10 +522,6 @@ static struct zpool *zswap_find_zpool(struct zswap_entry *entry)
>   */
>  static void zswap_free_entry(struct zswap_entry *entry)
>  {
> -	if (entry->objcg) {
> -		obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
> -		obj_cgroup_put(entry->objcg);
> -	}
>  	if (!entry->length)
>  		atomic_dec(&zswap_same_filled_pages);
>  	else {
> @@ -534,6 +530,10 @@ static void zswap_free_entry(struct zswap_entry *entry)
>  		atomic_dec(&entry->pool->nr_stored);
>  		zswap_pool_put(entry->pool);
>  	}
> +	if (entry->objcg) {
> +		obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
> +		obj_cgroup_put(entry->objcg);
> +	}
>  	zswap_entry_cache_free(entry);
>  	atomic_dec(&zswap_stored_pages);
>  	zswap_update_total_size();