On Tue, Feb 06, 2024 at 10:08:55AM -0800, Nhat Pham wrote:
> When a folio is swapped in, the protection size of the corresponding
> zswap LRU is incremented, so that the zswap shrinker is more
> conservative with its reclaiming action. This field is embedded within
> the struct lruvec, so updating it requires looking up the folio's memcg
> and lruvec. However, currently this lookup can happen after the folio is
> unlocked, for instance if a new folio is allocated, and
> swap_read_folio() unlocks the folio before returning. In this scenario,
> there is no stability guarantee for the binding between a folio and its
> memcg and lruvec:
>
> * A folio's memcg and lruvec can be freed between the lookup and the
>   update, leading to a UAF.
> * Folio migration can clear the now-unlocked folio's memcg_data, which
>   directs the zswap LRU protection size update towards the root memcg
>   instead of the original memcg. This was recently picked up by the
>   syzbot thanks to a warning in the inlined folio_lruvec() call.
>
> Move the zswap LRU protection range update above the swap_read_folio()
> call, and only when a new page is allocated, to prevent this.
>
> Reported-by: syzbot+17a611d10af7d18a7092@xxxxxxxxxxxxxxxxxxxxxxxxx
> Closes: https://lore.kernel.org/all/000000000000ae47f90610803260@xxxxxxxxxx/
> Fixes: b5ba474f3f51 ("zswap: shrink zswap pool based on memory pressure")
> Signed-off-by: Nhat Pham <nphamcs@xxxxxxxxx>

With the fixlet applied,
Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
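
For readers following along, below is a minimal userspace sketch of the ordering the patch establishes: bump the zswap LRU protection counter while the folio lock still pins the folio<->memcg<->lruvec binding, and only for a newly allocated folio, before swap_read_folio() drops the lock. All names ending in _sim (and do_swap_in) are simplified stand-ins for illustration, not the actual kernel API.

/*
 * Simplified illustration only; all types and functions below are
 * hypothetical stand-ins, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

struct lruvec_sim { unsigned long nr_zswap_protected; };
struct folio_sim  { bool locked; struct lruvec_sim *lruvec; bool newly_allocated; };

/* Looking up the lruvec is only stable while the folio is locked. */
static struct lruvec_sim *folio_lruvec_sim(struct folio_sim *folio)
{
	if (!folio->locked)
		fprintf(stderr, "BUG: lruvec lookup on unlocked folio\n");
	return folio->lruvec;
}

static void zswap_folio_swapin_sim(struct folio_sim *folio)
{
	/* Bump the protection size so the zswap shrinker backs off. */
	folio_lruvec_sim(folio)->nr_zswap_protected++;
}

/* Models swap_read_folio() unlocking the folio before returning. */
static void swap_read_folio_sim(struct folio_sim *folio)
{
	/* ... read the page contents ... */
	folio->locked = false;
}

static void do_swap_in(struct folio_sim *folio)
{
	/*
	 * Fixed ordering: update the zswap LRU protection first, and only
	 * when the folio was freshly allocated, while the lock is held.
	 */
	if (folio->newly_allocated)
		zswap_folio_swapin_sim(folio);

	swap_read_folio_sim(folio);

	/*
	 * Updating here instead would race with folio migration clearing
	 * memcg_data and with the memcg/lruvec being freed.
	 */
}

int main(void)
{
	struct lruvec_sim lruvec = { 0 };
	struct folio_sim folio = { .locked = true, .lruvec = &lruvec,
				   .newly_allocated = true };

	do_swap_in(&folio);
	printf("nr_zswap_protected = %lu\n", lruvec.nr_zswap_protected);
	return 0;
}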