[nacked] mm-memcg-disable-task-obj_stock-for-preempt_rt.patch removed from -mm tree

The patch titled
     Subject: mm/memcg: disable task obj_stock for PREEMPT_RT
has been removed from the -mm tree.  Its filename was
     mm-memcg-disable-task-obj_stock-for-preempt_rt.patch

This patch was dropped because it was nacked.

------------------------------------------------------
From: Waiman Long <longman@xxxxxxxxxx>
Subject: mm/memcg: disable task obj_stock for PREEMPT_RT

On a PREEMPT_RT kernel, preempt_disable() and local_irq_save() are
typically converted to local_lock() and local_lock_irqsave() respectively.
These two variants of local_lock() are essentially the same, so there is
no performance advantage in choosing one over the other.
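
For illustration, a minimal sketch of the two local_lock() variants the
changelog refers to is shown below.  It is not part of the patch; the
example_pcp structure and function names are invented for this sketch,
while the local_lock API itself comes from <linux/local_lock.h>:

	#include <linux/local_lock.h>
	#include <linux/percpu.h>

	/* Hypothetical per-CPU data, named only for this sketch. */
	struct example_pcp {
		local_lock_t	lock;
		unsigned long	count;
	};

	static DEFINE_PER_CPU(struct example_pcp, example_pcp) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	/* What a converted preempt_disable() section looks like. */
	static void example_update_task(void)
	{
		/* !RT: preempt_disable(); RT: a per-CPU spinlock */
		local_lock(&example_pcp.lock);
		this_cpu_inc(example_pcp.count);
		local_unlock(&example_pcp.lock);
	}

	/* What a converted local_irq_save() section looks like. */
	static void example_update_any(void)
	{
		unsigned long flags;

		/* !RT: local_irq_save(); RT: the same per-CPU spinlock */
		local_lock_irqsave(&example_pcp.lock, flags);
		this_cpu_inc(example_pcp.count);
		local_unlock_irqrestore(&example_pcp.lock, flags);
	}

On PREEMPT_RT both functions end up taking the same per-CPU lock, which
is why the preempt_disable()-based fast path loses its advantage there.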

As there is no point in maintaining two different sets of obj_stock, it is
simpler and more efficient to just disable task_obj and use only irq_obj
on PREEMPT_RT.  Note that task_obj remains in the memcg_stock_pcp
structure even though it is unused in this configuration.
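
For context, a rough sketch (field order approximate, CONFIG_MEMCG_KMEM
guards omitted) of the per-CPU structures the changelog refers to, as
they looked in mm/memcontrol.c at the time:

	struct obj_stock {
		struct obj_cgroup	*cached_objcg;
		struct pglist_data	*cached_pgdat;
		unsigned int		nr_bytes;
		int			nr_slab_reclaimable_b;
		int			nr_slab_unreclaimable_b;
	};

	struct memcg_stock_pcp {
		struct mem_cgroup	*cached;	/* this cpu's memcg stock */
		unsigned int		nr_pages;

		/* Used only from task context; unused on PREEMPT_RT. */
		struct obj_stock	task_obj;
		/* Usable from any context; the only one on PREEMPT_RT. */
		struct obj_stock	irq_obj;

		struct work_struct	work;
		unsigned long		flags;
	};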

Link: https://lkml.kernel.org/r/20210803175519.22298-1-longman@xxxxxxxxxx
Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Vladimir Davydov <vdavydov.dev@xxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
Cc: Luis Goncalves <lgoncalv@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memcontrol.c |   18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

--- a/mm/memcontrol.c~mm-memcg-disable-task-obj_stock-for-preempt_rt
+++ a/mm/memcontrol.c
@@ -2093,12 +2093,22 @@ static bool obj_stock_flush_required(str
  * which is cheap in non-preempt kernel. The interrupt context object stock
  * can only be accessed after disabling interrupt. User context code can
  * access interrupt object stock, but not vice versa.
+ *
+ * For PREEMPT_RT kernel, preempt_disable() and local_irq_save() may have
+ * to be changed to variants of local_lock(). This eliminates the
+ * performance advantage of using preempt_disable(). Fall back to always
+ * use local_irq_save() and use only irq_obj for simplicity.
  */
+static inline bool use_task_obj_stock(void)
+{
+	return !IS_ENABLED(CONFIG_PREEMPT_RT) && likely(in_task());
+}
+
 static inline struct obj_stock *get_obj_stock(unsigned long *pflags)
 {
 	struct memcg_stock_pcp *stock;
 
-	if (likely(in_task())) {
+	if (use_task_obj_stock()) {
 		*pflags = 0UL;
 		preempt_disable();
 		stock = this_cpu_ptr(&memcg_stock);
@@ -2112,7 +2122,7 @@ static inline struct obj_stock *get_obj_
 
 static inline void put_obj_stock(unsigned long flags)
 {
-	if (likely(in_task()))
+	if (use_task_obj_stock())
 		preempt_enable();
 	else
 		local_irq_restore(flags);
@@ -2185,7 +2195,7 @@ static void drain_local_stock(struct wor
 
 	stock = this_cpu_ptr(&memcg_stock);
 	drain_obj_stock(&stock->irq_obj);
-	if (in_task())
+	if (use_task_obj_stock())
 		drain_obj_stock(&stock->task_obj);
 	drain_stock(stock);
 	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
@@ -3163,7 +3173,7 @@ static bool obj_stock_flush_required(str
 {
 	struct mem_cgroup *memcg;
 
-	if (in_task() && stock->task_obj.cached_objcg) {
+	if (use_task_obj_stock() && stock->task_obj.cached_objcg) {
 		memcg = obj_cgroup_memcg(stock->task_obj.cached_objcg);
 		if (memcg && mem_cgroup_is_descendant(memcg, root_memcg))
 			return true;
_

Patches currently in -mm which might be from longman@xxxxxxxxxx are

mm-memcg-fix-incorrect-flushing-of-lruvec-data-in-obj_stock.patch



