[PATCH v3] sched/core: Don't use dying mm as active_mm of kthreads

It was found that a dying mm_struct, one whose owning task has exited,
can stay on as the active_mm of kernel threads as long as no other user
task runs on the CPUs that are still using it as their active_mm. This
prolongs the lifetime of the dying mm, holding up resources that cannot
be freed while the mm is alive, especially on a mostly idle system.
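For context, an mm_struct carries two reference counts: mm_users counts
real user-space users of the address space, while mm_count counts lazy
references such as each kernel thread's active_mm, plus one held on
behalf of all mm_users. The mm_struct itself is only freed when mm_count
drops to zero, so a single lazy reference can keep a dying mm
(mm_users == 0) pinned indefinitely. Below is a minimal userspace sketch
of that two-counter scheme; the names mirror the kernel's
mmgrab()/mmdrop()/mmput(), but this is a simplified single-threaded
model, not kernel code:

#include <stdio.h>
#include <stdlib.h>

/* Simplified model of the two mm_struct reference counts. */
struct mm {
	int mm_users;	/* real user-space users of the address space */
	int mm_count;	/* lazy refs (active_mm) + one for all mm_users */
};

/* mmdrop(): drop a lazy reference; free the mm at mm_count == 0. */
static void mmdrop(struct mm *mm)
{
	if (--mm->mm_count == 0) {
		printf("mm_struct freed\n");
		free(mm);
	}
}

/* mmput(): the last real user going away makes the mm "dying". */
static void mmput(struct mm *mm)
{
	if (--mm->mm_users == 0)
		mmdrop(mm);	/* drop the mm_users' shared mm_count ref */
}

int main(void)
{
	struct mm *mm = malloc(sizeof(*mm));

	mm->mm_users = 1;	/* one user task */
	mm->mm_count = 1;	/* shared ref held on behalf of mm_users */

	mm->mm_count++;		/* kthread borrows mm as active_mm (mmgrab) */
	mmput(mm);		/* user task exits: mm_users == 0, mm dying */

	/* The kthread's lazy reference still pins the mm_struct ... */
	mmdrop(mm);		/* ... until the kthread finally lets go. */
	return 0;
}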

Fix that by forcing kernel threads to use init_mm as the active_mm
during a kernel-thread to kernel-thread transition if the previous
active_mm is dying (!mm_users). This allows the resources associated
with the dying mm to be freed ASAP.
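Pulled out of the diff below and stripped of the enter_lazy_tlb(),
membarrier and finish_task_switch() details, the resulting mm-selection
logic looks roughly like this. It is a compilable userspace sketch: the
stub types and helpers (pick_next_mm, mmdrop_deferred, switch_mm) are
illustrative names, and only the branch structure follows the patch:

#include <stdio.h>

/* Stub types standing in for the kernel's; illustrative only. */
struct mm_struct { int mm_users; int mm_count; };
struct task { struct mm_struct *mm, *active_mm; };

static struct mm_struct init_mm = { .mm_users = 0, .mm_count = 1 };

static void mmgrab(struct mm_struct *mm)	/* take a lazy ref */
{
	mm->mm_count++;
}

static void mmdrop_deferred(struct mm_struct *mm)
{
	mm->mm_count--;	/* real kernel: mmdrop() in finish_task_switch() */
}

static void switch_mm(struct mm_struct *prev_mm, struct mm_struct *next_mm)
{
	printf("switch_mm: %p -> %p\n", (void *)prev_mm, (void *)next_mm);
}

/* The mm-selection branch structure of context_switch() with this patch. */
static void pick_next_mm(struct task *prev, struct task *next)
{
	struct mm_struct *next_mm = next->mm;

	if (!next_mm) {					/* to kernel */
		/* New case: prev is a kthread too and its mm is dying. */
		if (!prev->mm && prev->active_mm->mm_users == 0) {
			next_mm = next->active_mm = &init_mm;
			mmgrab(next_mm);
			goto mm_switch;
		}
		next->active_mm = prev->active_mm;	/* keep borrowing */
		if (prev->mm)				/* from user */
			mmgrab(prev->active_mm);
		else					/* from kernel */
			prev->active_mm = NULL;
		return;
	}
mm_switch:						/* to user, or to init_mm */
	switch_mm(prev->active_mm, next_mm);
	if (!prev->mm) {				/* from kernel */
		mmdrop_deferred(prev->active_mm);
		prev->active_mm = NULL;
	}
}

int main(void)
{
	struct mm_struct dying = { .mm_users = 0, .mm_count = 1 };
	struct task kthread1 = { .mm = NULL, .active_mm = &dying };
	struct task kthread2 = { .mm = NULL, .active_mm = NULL };

	pick_next_mm(&kthread1, &kthread2);	/* kernel -> kernel, dying mm */
	printf("dying.mm_count = %d -> would be freed\n", dying.mm_count);
	return 0;
}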

The presence of a kernel-thread to kernel-thread transition indicates
that the CPU is probably idling with no higher-priority user task to
run. So the overhead of loading the mm_users cacheline should not
really matter in this case.

My testing on an x86 system showed that, with this patch, the mm_struct
was freed within seconds of the task exiting, instead of staying alive
for minutes or even longer on a mostly idle system as before.
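One hypothetical way to observe this (root required): poll the
mm_struct line of /proc/slabinfo while a task exits on an otherwise
idle system; the active-object count should drop back within seconds
once the patch is applied. This assumes the mm_struct cache is visible
as its own slab (booting with slab_nomerge may be needed if caches are
merged, and counts are approximate under SLUB). A rough poller:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char line[512];

	for (;;) {
		FILE *f = fopen("/proc/slabinfo", "r");

		if (!f)
			return 1;	/* needs root */
		while (fgets(line, sizeof(line), f))
			if (!strncmp(line, "mm_struct", 9))
				fputs(line, stdout);
		fclose(f);
		sleep(1);
	}
	return 0;
}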

Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
---
 kernel/sched/core.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 795077af4f1a..41997e676251 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3214,6 +3214,8 @@ static __always_inline struct rq *
 context_switch(struct rq *rq, struct task_struct *prev,
 	       struct task_struct *next, struct rq_flags *rf)
 {
+	struct mm_struct *next_mm = next->mm;
+
 	prepare_task_switch(rq, prev, next);
 
 	/*
@@ -3229,8 +3231,22 @@ context_switch(struct rq *rq, struct task_struct *prev,
 	 *
 	 * kernel ->   user   switch + mmdrop() active
 	 *   user ->   user   switch
+	 *
+	 * kernel -> kernel and !prev->active_mm->mm_users:
+	 *   switch to init_mm + mmgrab() + mmdrop()
 	 */
-	if (!next->mm) {                                // to kernel
+	if (!next_mm) {					// to kernel
+		/*
+		 * Checking is only done on kernel -> kernel transition
+		 * to avoid any performance overhead while user tasks
+		 * are running.
+		 */
+		if (unlikely(!prev->mm &&
+			     !atomic_read(&prev->active_mm->mm_users))) {
+			next_mm = next->active_mm = &init_mm;
+			mmgrab(next_mm);
+			goto mm_switch;
+		}
 		enter_lazy_tlb(prev->active_mm, next);
 
 		next->active_mm = prev->active_mm;
@@ -3239,6 +3255,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
 		else
 			prev->active_mm = NULL;
 	} else {                                        // to user
+mm_switch:
 		/*
 		 * sys_membarrier() requires an smp_mb() between setting
 		 * rq->curr and returning to userspace.
@@ -3248,7 +3265,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
 		 * finish_task_switch()'s mmdrop().
 		 */
 
-		switch_mm_irqs_off(prev->active_mm, next->mm, next);
+		switch_mm_irqs_off(prev->active_mm, next_mm, next);
 
 		if (!prev->mm) {                        // from kernel
 			/* will mmdrop() in finish_task_switch(). */
-- 
2.18.1



