Re: [PATCH] blk-cgroup: Flush stats before releasing blkcg_gq

On 5/23/23 22:06, Yosry Ahmed wrote:
Hi Ming,

On Tue, May 23, 2023 at 6:21 PM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
As noted by Michal, the blkg_iostat_set's in the lockless list
hold references to their blkg's to protect against removal. Those
blkg's in turn hold references to the blkcg. When a cgroup is being
destroyed, cgroup_rstat_flush() is only called from
css_release_work_fn(), which runs once the blkcg reference count
reaches 0. This circular dependency prevents the blkcg and some
blkgs from ever being freed after they are taken offline.
I am not at all familiar with blkcg, but does calling
cgroup_rstat_flush() in offline_css() fix the problem? or can items be
added to the lockless list(s) after the blkcg is offlined?

This is less of a problem if the cgroup being destroyed also has other
controllers, like memory, that call cgroup_rstat_flush() and thereby
drop the reference counts. But if block is the only controller that uses
rstat, these offline blkcg and blkgs may never be freed, leaking more
and more memory over time.

To prevent this potential memory leak:

- a new cgroup_rstat_css_cpu_flush() function is added to flush stats for
a given css and cpu. This new function will be called in __blkg_release().

- don't grab a reference to bio->bi_blkg when adding the stats to the
blkcg's per-cpu stat list; this kind of handling was the most fragile
part of the original patch

Based on Waiman's patch:

https://lore.kernel.org/linux-block/20221215033132.230023-3-longman@xxxxxxxxxx/

Fixes: 3b8cc6298724 ("blk-cgroup: Optimize blkcg_rstat_flush()")
Cc: Waiman Long <longman@xxxxxxxxxx>
Cc: cgroups@xxxxxxxxxxxxxxx
Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: mkoutny@xxxxxxxx
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
  block/blk-cgroup.c     | 15 +++++++++++++--
  include/linux/cgroup.h |  1 +
  kernel/cgroup/rstat.c  | 18 ++++++++++++++++++
  3 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 0ce64dd73cfe..5437b6af3955 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -163,10 +163,23 @@ static void blkg_free(struct blkcg_gq *blkg)
  static void __blkg_release(struct rcu_head *rcu)
  {
         struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head);
+       struct blkcg *blkcg = blkg->blkcg;
+       int cpu;

  #ifdef CONFIG_BLK_CGROUP_PUNT_BIO
         WARN_ON(!bio_list_empty(&blkg->async_bios));
  #endif
+       /*
+        * Flush all the non-empty percpu lockless lists before releasing
+        * us. Meanwhile, no new bio can refer to this blkg any more
+        * since its refcnt has been killed.
+        */
+       for_each_possible_cpu(cpu) {
+               struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
+
+               if (!llist_empty(lhead))
+                       cgroup_rstat_css_cpu_flush(&blkcg->css, cpu);
+       }

         /* release the blkcg and parent blkg refs this blkg has been holding */
         css_put(&blkg->blkcg->css);
@@ -991,7 +1004,6 @@ static void blkcg_rstat_flush(struct cgroup_subsys_state *css, int cpu)
                 if (parent && parent->parent)
                         blkcg_iostat_update(parent, &blkg->iostat.cur,
                                             &blkg->iostat.last);
-               percpu_ref_put(&blkg->refcnt);
         }

  out:
@@ -2075,7 +2087,6 @@ void blk_cgroup_bio_start(struct bio *bio)

                 llist_add(&bis->lnode, lhead);
                 WRITE_ONCE(bis->lqueued, true);
-               percpu_ref_get(&bis->blkg->refcnt);
         }

         u64_stats_update_end_irqrestore(&bis->sync, flags);
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 885f5395fcd0..97d4764d8e6a 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -695,6 +695,7 @@ void cgroup_rstat_flush(struct cgroup *cgrp);
  void cgroup_rstat_flush_atomic(struct cgroup *cgrp);
  void cgroup_rstat_flush_hold(struct cgroup *cgrp);
  void cgroup_rstat_flush_release(void);
+void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu);

  /*
   * Basic resource stats.
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 9c4c55228567..96e7a4e6da72 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -281,6 +281,24 @@ void cgroup_rstat_flush_release(void)
         spin_unlock_irq(&cgroup_rstat_lock);
  }

+/**
+ * cgroup_rstat_css_cpu_flush - flush stats for the given css and cpu
+ * @css: target css to be flushed
+ * @cpu: the cpu that holds the stats to be flushed
+ *
+ * A lightweight rstat flush operation for a given css and cpu.
+ * Only the cpu_lock is being held for mutual exclusion, the cgroup_rstat_lock
+ * isn't used.
(Adding linux-mm and memcg maintainers)
+Linux-MM +Michal Hocko +Shakeel Butt +Johannes Weiner +Roman Gushchin
+Muchun Song

I don't think flushing the stats without holding cgroup_rstat_lock is
safe for memcg stats flushing. mem_cgroup_css_rstat_flush() modifies
some non-percpu data (e.g. memcg->vmstats->state,
memcg->vmstats->state_pending).

Perhaps make this a separate callback from css_rstat_flush() (e.g.
css_rstat_flush_cpu() or something), so that it's clear which
subsystems support this? In this case, only blkcg would implement the
callback.

That function is added only to call blkcg_rstat_flush(), which flushes the stats in the blkcg, and that should be safe. I agree that the comment should list the preconditions for calling it.

Cheers,
Longman



