[PATCH RFC 02/26] task_work: Replace spin_unlock_wait() with lock/unlock pair

There is no agreed-upon definition of spin_unlock_wait()'s semantics,
and it appears that all callers could do just as well with a lock/unlock
pair.  This commit therefore replaces the spin_unlock_wait() call in
task_work_run() with spin_lock() followed immediately by spin_unlock().
This should be safe from a performance perspective because calls to the
other side of the race, task_work_cancel(), should be rare.
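
For illustration only (this sketch is not part of the patch, and "lock" is a
hypothetical raw_spinlock_t standing in for task->pi_lock), the exchanged
idioms look like this:

	/*
	 * Old idiom: spin until the lock is observed to be unlocked.
	 * The memory-ordering guarantees this provides were never
	 * agreed upon, which is the motivation for removing it.
	 */
	raw_spin_unlock_wait(&lock);

	/*
	 * New idiom: briefly acquire and then immediately release the
	 * lock.  This waits for any current critical section to
	 * complete and provides full acquire/release ordering against
	 * it, which is at least as strong as any plausible reading of
	 * the old idiom's semantics.
	 */
	raw_spin_lock(&lock);
	raw_spin_unlock(&lock);

The added cost is an uncontended lock/unlock on the task_work_run() path,
which is acceptable here because the lock is contended only by a racing
task_work_cancel(), and such races should be rare.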

Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Will Deacon <will.deacon@xxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Alan Stern <stern@xxxxxxxxxxxxxxxxxxx>
Cc: Andrea Parri <parri.andrea@xxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
---
 kernel/task_work.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/task_work.c b/kernel/task_work.c
index d513051fcca2..b9b428832229 100644
--- a/kernel/task_work.c
+++ b/kernel/task_work.c
@@ -109,7 +109,8 @@ void task_work_run(void)
 		 * the first entry == work, cmpxchg(task_works) should
 		 * fail, but it can play with *work and other entries.
 		 */
-		raw_spin_unlock_wait(&task->pi_lock);
+		raw_spin_lock(&task->pi_lock);
+		raw_spin_unlock(&task->pi_lock);
 
 		do {
 			next = work->next;
-- 
2.5.2



