[PATCH] cgroup: rstat: optimize flush through speculative test

Currently cgroup_rstat_updated() has a speculative already-on-list test
to check whether the given cgroup is already part of the rstat update
tree. This helps reduce contention on the per-cpu rstat lock. This
patch adds a similar speculative not-on-list test to the rstat flush
codepath.
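
For illustration only (this is not part of the patch), below is a
minimal userspace C sketch of the same lock-avoidance pattern. The node
type, lock, and function names are hypothetical, and a C11 relaxed
atomic load stands in for the kernel's data_race() annotation:

  #include <pthread.h>
  #include <stdatomic.h>

  /*
   * Hypothetical per-cpu list node: a non-NULL ->next (self-pointing
   * at the tail) means the node is already on the update list.
   */
  struct node {
          _Atomic(struct node *) next;
  };

  /* Initialize with pthread_spin_init() before use. */
  static pthread_spinlock_t cpu_lock;

  /* Update side: speculative already-on-list test before the lock. */
  static void node_updated(struct node *n)
  {
          /*
           * Racy read used only as a hint: a stale result merely
           * costs one redundant lock acquisition.
           */
          if (atomic_load_explicit(&n->next, memory_order_relaxed))
                  return;

          pthread_spin_lock(&cpu_lock);
          /* ... re-check ->next under the lock, then link the node ... */
          pthread_spin_unlock(&cpu_lock);
  }

  /* Flush side: speculative not-on-list test before the lock. */
  static void node_flush(struct node *n)
  {
          /* A stale NULL only defers this node to the next flush. */
          if (!atomic_load_explicit(&n->next, memory_order_relaxed))
                  return;

          pthread_spin_lock(&cpu_lock);
          /* ... pop entries and fold their stats under the lock ... */
          pthread_spin_unlock(&cpu_lock);
  }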

Recently, commit aa48e47e3906 ("memcg: infrastructure to flush memcg
stats") added a periodic rstat flush. On a large system that is mostly
idle, most of the per-cpu rstat trees will be empty, so the speculative
not-on-list test eliminates unnecessary work and potentially reduces
contention on the per-cpu rstat lock. Note that this may introduce
temporary inaccuracy, but with the frequent periodic flush that is not
an issue.
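
To make the "temporary inaccuracy" argument concrete, here is another
hedged userspace sketch (again, not part of the patch): node_flush()
and pcpu_nodes are the hypothetical names from the sketch above, and
the 2s period is an arbitrary placeholder, not the kernel's value. Any
node skipped because of a stale NULL read is simply picked up on the
next pass, so staleness is bounded by the flush period:

  #include <time.h>

  #define NCPUS 8  /* placeholder CPU count */

  struct node;                            /* node from the sketch above */
  extern struct node *pcpu_nodes[NCPUS];  /* one update tree per CPU */
  void node_flush(struct node *n);

  static void flush_loop(void)
  {
          const struct timespec period = { .tv_sec = 2 };

          for (;;) {
                  /* A stale skip here is repaired on the next pass. */
                  for (int cpu = 0; cpu < NCPUS; cpu++)
                          node_flush(pcpu_nodes[cpu]);
                  nanosleep(&period, NULL);
          }
  }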

To evaluate the impact of this patch, an 8 GiB tmpfs file was created
on a system with swap-on-zram, and the file was pushed to swap through
the memory.force_empty interface. Reading the whole file back triggers
the memcg stat flush in the refault code path. With this patch, we
observed a 38% reduction in the read time of the 8 GiB file.

Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
---
 kernel/cgroup/rstat.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index b264ab5652ba..748494fbc786 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -35,7 +35,7 @@ void cgroup_rstat_updated(struct cgroup *cgrp, int cpu)
 	 * instead of NULL, we can tell whether @cgrp is on the list by
 	 * testing the next pointer for NULL.
 	 */
-	if (cgroup_rstat_cpu(cgrp, cpu)->updated_next)
+	if (data_race(cgroup_rstat_cpu(cgrp, cpu)->updated_next))
 		return;
 
 	raw_spin_lock_irqsave(cpu_lock, flags);
@@ -157,6 +157,13 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
 						       cpu);
 		struct cgroup *pos = NULL;
 
+		/*
+		 * Speculative not-on-list test. This may lead to temporary
+		 * inaccuracies which is fine.
+		 */
+		if (!data_race(cgroup_rstat_cpu(cgrp, cpu)->updated_next))
+			goto next;
+
 		raw_spin_lock(cpu_lock);
 		while ((pos = cgroup_rstat_cpu_pop_updated(pos, cgrp, cpu))) {
 			struct cgroup_subsys_state *css;
@@ -170,7 +177,7 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep)
 			rcu_read_unlock();
 		}
 		raw_spin_unlock(cpu_lock);
-
+next:
 		/* if @may_sleep, play nice and yield if necessary */
 		if (may_sleep && (need_resched() ||
 				  spin_needbreak(&cgroup_rstat_lock))) {
-- 
2.33.0.685.g46640cef36-goog



