[RFC PATCH 70/86] treewide: ipc: remove cond_resched()

There are broadly three sets of uses of cond_resched() (sketched
below):

1.  Calls to cond_resched() out of the goodness of our hearts,
    otherwise known as avoiding lockup splats.

2.  Open coded variants of cond_resched_lock() which call
    cond_resched().

3.  Retry or error handling loops, where cond_resched() is used as a
    quick alternative to spinning in a tight loop.
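
For illustration, the three patterns look roughly like this (the loop
bodies, conditions, and the lock are made up for the example, not
taken from this patch):

	/* Set-1: long running loop with an explicit scheduling point. */
	while (have_more_work()) {		/* hypothetical */
		do_one_unit_of_work();		/* hypothetical */
		cond_resched();			/* avoid lockup splats */
	}

	/* Set-2: open coded equivalent of cond_resched_lock(&lock). */
	if (spin_needbreak(&lock)) {
		spin_unlock(&lock);
		cond_resched();
		spin_lock(&lock);
	}

	/* Set-3: retry loop, cond_resched() instead of a real wait. */
	while (operation_busy())		/* hypothetical */
		cond_resched();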

When running under a full preemption model, the cond_resched()
reduces to a NOP (not even a barrier), so removing it obviously
cannot matter.

But considering only voluntary preemption models (for, say, code that
has mostly been tested under those), for set-1 and set-2 the
scheduler can now preempt kernel tasks running beyond their time
quanta anywhere they are preemptible() [1], which removes any need
for these explicitly placed scheduling points.

The cond_resched() calls in set-3 are a little more difficult.
To start with, given its NOP character under full preemption, it
never actually saved us from a tight loop.
With voluntary preemption, it's not a NOP, but it might as well be --
for most workloads the scheduler does not have an interminable supply
of runnable tasks on the runqueue.

So, cond_resched() is useful for avoiding softlockup splats, but not
terribly good for error handling. Ideally, these should be replaced
with some kind of timed or event wait.
For now we use cond_resched_stall(), which tries to schedule if
possible and executes a cpu_relax() if not.
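
As a sketch of that conversion (hypothetical loop condition;
cond_resched_stall() is introduced earlier in this series), a set-3
retry loop changes from:

	while (!device_ready())		/* hypothetical */
		cond_resched();

to:

	while (!device_ready())
		cond_resched_stall();	/* schedule if possible, else cpu_relax() */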

All the cond_resched() calls here are from set-1, in potentially
long-running loops. Remove them.

[1] https://lore.kernel.org/lkml/20231107215742.363031-1-ankur.a.arora@xxxxxxxxxx/

Cc: Davidlohr Bueso <dave@xxxxxxxxxxxx> 
Cc: Christophe JAILLET <christophe.jaillet@xxxxxxxxxx> 
Cc: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx> 
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> 
Cc: Jann Horn <jannh@xxxxxxxxxx> 
Signed-off-by: Ankur Arora <ankur.a.arora@xxxxxxxxxx>
---
 ipc/msgutil.c | 3 ---
 ipc/sem.c     | 2 --
 2 files changed, 5 deletions(-)

diff --git a/ipc/msgutil.c b/ipc/msgutil.c
index d0a0e877cadd..d9d1b7957bb6 100644
--- a/ipc/msgutil.c
+++ b/ipc/msgutil.c
@@ -62,8 +62,6 @@ static struct msg_msg *alloc_msg(size_t len)
 	while (len > 0) {
 		struct msg_msgseg *seg;
 
-		cond_resched();
-
 		alen = min(len, DATALEN_SEG);
 		seg = kmalloc(sizeof(*seg) + alen, GFP_KERNEL_ACCOUNT);
 		if (seg == NULL)
@@ -177,7 +175,6 @@ void free_msg(struct msg_msg *msg)
 	while (seg != NULL) {
 		struct msg_msgseg *tmp = seg->next;
 
-		cond_resched();
 		kfree(seg);
 		seg = tmp;
 	}
diff --git a/ipc/sem.c b/ipc/sem.c
index a39cdc7bf88f..e12ab01161f6 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -2350,8 +2350,6 @@ void exit_sem(struct task_struct *tsk)
 		int semid, i;
 		DEFINE_WAKE_Q(wake_q);
 
-		cond_resched();
-
 		rcu_read_lock();
 		un = list_entry_rcu(ulp->list_proc.next,
 				    struct sem_undo, list_proc);
-- 
2.31.1