On Mon, Jul 29, 2024 at 10:25 PM Takero Funaki <flintglass@xxxxxxxxx> wrote:
>
> On Tue, Jul 30, 2024 at 3:24 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> >
> > On Sat, Jul 27, 2024 at 4:06 PM Takero Funaki <flintglass@xxxxxxxxx> wrote:
> > >
> > > This patch fixes an issue where the zswap global shrinker stopped
> > > iterating through the memcg tree.
> > >
> > > The problem was that shrink_worker() would restart iterating memcg tree
> > > from the tree root, considering an offline memcg as a failure, and abort
> > > shrinking after encountering the same offline memcg 16 times even if
> > > there is only one offline memcg. After this change, an offline memcg in
> > > the tree is no longer considered a failure. This allows the shrinker to
> > > continue shrinking the other online memcgs regardless of whether an
> > > offline memcg exists, gives higher zswap writeback activity.
> > >
> > > To avoid holding refcount of offline memcg encountered during the memcg
> > > tree walking, shrink_worker() must continue iterating to release the
> > > offline memcg to ensure the next memcg stored in the cursor is online.
> > >
> > > The offline memcg cleaner has also been changed to avoid the same issue.
> > > When the next memcg of the offlined memcg is also offline, the refcount
> > > stored in the iteration cursor was held until the next shrink_worker()
> > > run. The cleaner must release the offline memcg recursively.
> > >
> > > Fixes: a65b0e7607cc ("zswap: make shrinking memcg-aware")
> > > Signed-off-by: Takero Funaki <flintglass@xxxxxxxxx>
> > > ---
> > >  mm/zswap.c | 73 ++++++++++++++++++++++++++++++++++++------------------
> > >  1 file changed, 49 insertions(+), 24 deletions(-)
> > >
> > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > index adeaf9c97fde..e9b5343256cd 100644
> > > --- a/mm/zswap.c
> > > +++ b/mm/zswap.c
> > > @@ -765,12 +765,31 @@ void zswap_folio_swapin(struct folio *folio)
> > >  	}
> > >  }
> > >
> > > +/*
> > > + * This function should be called when a memcg is being offlined.
> > > + *
> > > + * Since the global shrinker shrink_worker() may hold a reference
> > > + * of the memcg, we must check and release the reference in
> > > + * zswap_next_shrink.
> > > + *
> > > + * shrink_worker() must handle the case where this function releases
> > > + * the reference of memcg being shrunk.
> > > + */
> > >  void zswap_memcg_offline_cleanup(struct mem_cgroup *memcg)
> > >  {
> > >  	/* lock out zswap shrinker walking memcg tree */
> > >  	spin_lock(&zswap_shrink_lock);
> > > -	if (zswap_next_shrink == memcg)
> > > -		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> > > +	if (zswap_next_shrink == memcg) {
> > > +		do {
> > > +			zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> > > +		} while (zswap_next_shrink && !mem_cgroup_online(zswap_next_shrink));
> > > +		/*
> > > +		 * We verified the next memcg is online. Even if the next
> > > +		 * memcg is being offlined here, another cleaner must be
> > > +		 * waiting for our lock. We can leave the online memcg
> > > +		 * reference.
> > > +		 */
> >
> > I thought we agreed to drop this comment :)
> >
> > > +	}
> > >  	spin_unlock(&zswap_shrink_lock);
> > >  }
> > >
> > > @@ -1304,43 +1323,49 @@ static void shrink_worker(struct work_struct *w)
> > >  	/* Reclaim down to the accept threshold */
> > >  	thr = zswap_accept_thr_pages();
> > >
> > > -	/* global reclaim will select cgroup in a round-robin fashion. */
> > > +	/* global reclaim will select cgroup in a round-robin fashion.
> >
> > nit: s/global/Global
> >
> > > +	 *
> > > +	 * We save iteration cursor memcg into zswap_next_shrink,
> > > +	 * which can be modified by the offline memcg cleaner
> > > +	 * zswap_memcg_offline_cleanup().
> > > +	 *
> > > +	 * Since the offline cleaner is called only once, we cannot leave an
> > > +	 * offline memcg reference in zswap_next_shrink.
> > > +	 * We can rely on the cleaner only if we get online memcg under lock.
> > > +	 *
> > > +	 * If we get an offline memcg, we cannot determine if the cleaner has
> > > +	 * already been called or will be called later. We must put back the
> > > +	 * reference before returning from this function. Otherwise, the
> > > +	 * offline memcg left in zswap_next_shrink will hold the reference
> > > +	 * until the next run of shrink_worker().
> > > +	 */
> > >  	do {
> > >  		spin_lock(&zswap_shrink_lock);
> > > -		zswap_next_shrink = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> > > -		memcg = zswap_next_shrink;
> > >
> > >  		/*
> > > -		 * We need to retry if we have gone through a full round trip, or if we
> > > -		 * got an offline memcg (or else we risk undoing the effect of the
> > > -		 * zswap memcg offlining cleanup callback). This is not catastrophic
> > > -		 * per se, but it will keep the now offlined memcg hostage for a while.
> > > -		 *
> > > +		 * Start shrinking from the next memcg after zswap_next_shrink.
> > > +		 * When the offline cleaner has already advanced the cursor,
> > > +		 * advancing the cursor here overlooks one memcg, but this
> > > +		 * should be negligibly rare.
> > > +		 */
> > > +		do {
> > > +			memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
> > > +			zswap_next_shrink = memcg;
> > > +		} while (memcg && !mem_cgroup_tryget_online(memcg));
> >
> > Let's move spin_lock() and spin_unlock() to be right above and before
> > the do-while loop, similar to zswap_memcg_offline_cleanup(). This
> > should make it more obvious what the lock is protecting.
> >
> > Actually, maybe it would be cleaner at this point to move the
> > iteration to find the next online memcg under lock into a helper, and
> > use it here and in zswap_memcg_offline_cleanup(). zswap_shrink_lock
> > and zswap_next_shrink can be made static to this helper and maybe some
> > of the comments could live there instead. Something like
> > zswap_next_shrink_memcg().
> >
> > This will abstract this whole iteration logic and make shrink_worker()
> > significantly easier to follow. WDYT?
> >
> > I can do that in a followup cleanup patch if you prefer this as well.
> >
>
> I'd really appreciate it. Sorry to have kept you waiting for a novice
> coder. Thank you for all your comments and support.

I will send a followup patch after this lands in mm-unstable.

For this patch, feel free to add:
Acked-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
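
For the followup, roughly what I have in mind is something like the below
(completely untested sketch; the helper name and exact structure are
placeholders, not the final code):

/* Untested sketch of the helper suggested above. */
static struct mem_cgroup *zswap_next_shrink_memcg(void)
{
	struct mem_cgroup *memcg;

	spin_lock(&zswap_shrink_lock);
	/*
	 * Advance the cursor until we find an online memcg.
	 * mem_cgroup_iter() drops the reference on the previous cursor, so
	 * an offline memcg is never left pinned in zswap_next_shrink, and
	 * mem_cgroup_tryget_online() takes an extra reference for the
	 * caller so the returned memcg cannot go away while it is being
	 * shrunk.
	 */
	do {
		memcg = mem_cgroup_iter(NULL, zswap_next_shrink, NULL);
		zswap_next_shrink = memcg;
	} while (memcg && !mem_cgroup_tryget_online(memcg));
	spin_unlock(&zswap_shrink_lock);

	return memcg;
}

shrink_worker() would then just loop on this helper (stopping when it
returns NULL), and most of the iteration comments could move into it.
Again, just a sketch to show the direction, not the final code.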