+ directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks.patch added to -mm tree

The patch titled

     Directed yield: cpu_relax variants for spinlocks and rw-locks

has been added to the -mm tree.  Its filename is

     directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks.patch

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt to find
out what to do about this

------------------------------------------------------
Subject: Directed yield: cpu_relax variants for spinlocks and rw-locks
From: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>

On systems running with virtual cpus there is optimization potential with
regard to spinlocks and rw-locks.  If the virtual cpu that holds a lock is
known to a cpu that wants to acquire the same lock, it is beneficial to
yield the timeslice of the waiting virtual cpu in favour of the cpu that
holds the lock (directed yield).

With CONFIG_PREEMPT="n" this can be implemented by the architecture without
common code changes.  Powerpc already does this.
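
As an illustration only (not part of this patch): with CONFIG_PREEMPT="n"
the architecture owns the complete lock loop, so it can do the directed
yield itself if it records the owning cpu in the lock and has some
hypervisor yield primitive.  The names hv_yield_to() and owner_cpu below
are hypothetical placeholders, not an existing interface:

	/* Hypothetical sketch of an arch-private spin loop with directed yield. */
	static inline void __raw_spin_lock(raw_spinlock_t *lock)
	{
		while (!__raw_spin_trylock(lock)) {
			int owner = lock->owner_cpu;	/* holder, if the arch records it */

			if (owner >= 0)
				hv_yield_to(owner);	/* give our timeslice to the holder */
			else
				cpu_relax();
		}
	}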

With CONFIG_PREEMPT="y" the lock loops are coded with _raw_spin_trylock,
_raw_read_trylock and _raw_write_trylock in kernel/spinlock.c.  If the lock
cannot be taken, cpu_relax is called.  A directed yield is not possible
there because cpu_relax doesn't know anything about the lock.  To be able
to yield in favour of the current lock holder, variants of cpu_relax for
spinlocks and rw-locks are needed.  The new _raw_spin_relax,
_raw_read_relax and _raw_write_relax primitives differ from cpu_relax in
that they take an argument: a pointer to the lock structure.
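
For illustration only: an architecture running under a hypervisor could
implement the new primitive roughly as sketched below.  hv_yield_to() and
the owner_cpu field are hypothetical; the actual implementations for
powerpc and s390 come with the follow-on directed-yield patches listed at
the end of this mail.

	/* Hypothetical sketch: directed yield from the relax primitive. */
	static inline void _raw_spin_relax(raw_spinlock_t *lock)
	{
		int owner = lock->owner_cpu;	/* cpu that currently holds the lock */

		if (owner >= 0)
			hv_yield_to(owner);	/* yield our timeslice to the holder */
		else
			cpu_relax();
	}

Architectures without such an interface keep the old behaviour by defining
_raw_spin_relax(lock) as cpu_relax(), which is what this patch does for all
existing spinlock.h headers.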

Signed-off-by: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: Paul Mackerras <paulus@xxxxxxxxx>
Cc: Haavard Skinnemoen <hskinnemoen@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 include/asm-alpha/spinlock.h         |    4 ++++
 include/asm-arm/spinlock.h           |    4 ++++
 include/asm-cris/arch-v32/spinlock.h |    4 ++++
 include/asm-i386/spinlock.h          |    4 ++++
 include/asm-ia64/spinlock.h          |    4 ++++
 include/asm-m32r/spinlock.h          |    4 ++++
 include/asm-mips/spinlock.h          |    4 ++++
 include/asm-parisc/spinlock.h        |    4 ++++
 include/asm-powerpc/spinlock.h       |    4 ++++
 include/asm-ppc/spinlock.h           |    4 ++++
 include/asm-s390/spinlock.h          |    4 ++++
 include/asm-sh/spinlock.h            |    4 ++++
 include/asm-sparc/spinlock.h         |    4 ++++
 include/asm-sparc64/spinlock.h       |    4 ++++
 include/asm-x86_64/spinlock.h        |    4 ++++
 kernel/spinlock.c                    |    4 ++--
 16 files changed, 62 insertions(+), 2 deletions(-)

diff -puN include/asm-alpha/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-alpha/spinlock.h
--- a/include/asm-alpha/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-alpha/spinlock.h
@@ -166,4 +166,8 @@ static inline void __raw_write_unlock(ra
 	lock->lock = 0;
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* _ALPHA_SPINLOCK_H */
diff -puN include/asm-arm/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-arm/spinlock.h
--- a/include/asm-arm/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-arm/spinlock.h
@@ -218,4 +218,8 @@ static inline int __raw_read_trylock(raw
 /* read_can_lock - would read_trylock() succeed? */
 #define __raw_read_can_lock(x)		((x)->lock < 0x80000000)
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* __ASM_SPINLOCK_H */
diff -puN include/asm-cris/arch-v32/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-cris/arch-v32/spinlock.h
--- a/include/asm-cris/arch-v32/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-cris/arch-v32/spinlock.h
@@ -160,4 +160,8 @@ static __inline__ int is_write_locked(rw
 	return rw->counter < 0;
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* __ASM_ARCH_SPINLOCK_H */
diff -puN include/asm-i386/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-i386/spinlock.h
--- a/include/asm-i386/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-i386/spinlock.h
@@ -204,4 +204,8 @@ static inline void __raw_write_unlock(ra
 				 : "+m" (rw->lock) : : "memory");
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* __ASM_SPINLOCK_H */
diff -puN include/asm-ia64/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-ia64/spinlock.h
--- a/include/asm-ia64/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-ia64/spinlock.h
@@ -213,4 +213,8 @@ static inline int __raw_read_trylock(raw
 	return (u32)ia64_cmpxchg4_acq((__u32 *)(x), new.word, old.word) == old.word;
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /*  _ASM_IA64_SPINLOCK_H */
diff -puN include/asm-m32r/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-m32r/spinlock.h
--- a/include/asm-m32r/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-m32r/spinlock.h
@@ -309,4 +309,8 @@ static inline int __raw_write_trylock(ra
 	return 0;
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif	/* _ASM_M32R_SPINLOCK_H */
diff -puN include/asm-mips/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-mips/spinlock.h
--- a/include/asm-mips/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-mips/spinlock.h
@@ -283,4 +283,8 @@ static inline int __raw_write_trylock(ra
 	return ret;
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* _ASM_SPINLOCK_H */
diff -puN include/asm-parisc/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-parisc/spinlock.h
--- a/include/asm-parisc/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-parisc/spinlock.h
@@ -187,4 +187,8 @@ static __inline__ int __raw_write_can_lo
 	return !rw->counter;
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* __ASM_SPINLOCK_H */
diff -puN include/asm-powerpc/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-powerpc/spinlock.h
--- a/include/asm-powerpc/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-powerpc/spinlock.h
@@ -285,5 +285,9 @@ static __inline__ void __raw_write_unloc
 	rw->lock = 0;
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* __KERNEL__ */
 #endif /* __ASM_SPINLOCK_H */
diff -puN include/asm-ppc/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-ppc/spinlock.h
--- a/include/asm-ppc/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-ppc/spinlock.h
@@ -161,4 +161,8 @@ static __inline__ void __raw_write_unloc
 	rw->lock = 0;
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* __ASM_SPINLOCK_H */
diff -puN include/asm-s390/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-s390/spinlock.h
--- a/include/asm-s390/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-s390/spinlock.h
@@ -135,4 +135,8 @@ static inline int __raw_write_trylock(ra
 	return _raw_write_trylock_retry(rw);
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* __ASM_SPINLOCK_H */
diff -puN include/asm-sh/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-sh/spinlock.h
--- a/include/asm-sh/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-sh/spinlock.h
@@ -100,4 +100,8 @@ static inline int __raw_write_trylock(ra
 	return 0;
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* __ASM_SH_SPINLOCK_H */
diff -puN include/asm-sparc/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-sparc/spinlock.h
--- a/include/asm-sparc/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-sparc/spinlock.h
@@ -154,6 +154,10 @@ static inline int __raw_write_trylock(ra
 #define __raw_spin_lock_flags(lock, flags) __raw_spin_lock(lock)
 #define __raw_read_trylock(lock) generic__raw_read_trylock(lock)
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #define __raw_read_can_lock(rw) (!((rw)->lock & 0xff))
 #define __raw_write_can_lock(rw) (!(rw)->lock)
 
diff -puN include/asm-sparc64/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-sparc64/spinlock.h
--- a/include/asm-sparc64/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-sparc64/spinlock.h
@@ -241,6 +241,10 @@ static int inline __write_trylock(raw_rw
 #define __raw_read_can_lock(rw)		(!((rw)->lock & 0x80000000UL))
 #define __raw_write_can_lock(rw)	(!(rw)->lock)
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* !(__ASSEMBLY__) */
 
 #endif /* !(__SPARC64_SPINLOCK_H) */
diff -puN include/asm-x86_64/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks include/asm-x86_64/spinlock.h
--- a/include/asm-x86_64/spinlock.h~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/include/asm-x86_64/spinlock.h
@@ -133,4 +133,8 @@ static inline void __raw_write_unlock(ra
 				: "=m" (rw->lock) : : "memory");
 }
 
+#define _raw_spin_relax(lock)	cpu_relax()
+#define _raw_read_relax(lock)	cpu_relax()
+#define _raw_write_relax(lock)	cpu_relax()
+
 #endif /* __ASM_SPINLOCK_H */
diff -puN kernel/spinlock.c~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks kernel/spinlock.c
--- a/kernel/spinlock.c~directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks
+++ a/kernel/spinlock.c
@@ -226,7 +226,7 @@ void __lockfunc _##op##_lock(locktype##_
 		if (!(lock)->break_lock)				\
 			(lock)->break_lock = 1;				\
 		while (!op##_can_lock(lock) && (lock)->break_lock)	\
-			cpu_relax();					\
+			_raw_##op##_relax(&lock->raw_lock);		\
 	}								\
 	(lock)->break_lock = 0;						\
 }									\
@@ -248,7 +248,7 @@ unsigned long __lockfunc _##op##_lock_ir
 		if (!(lock)->break_lock)				\
 			(lock)->break_lock = 1;				\
 		while (!op##_can_lock(lock) && (lock)->break_lock)	\
-			cpu_relax();					\
+			_raw_##op##_relax(&lock->raw_lock);		\
 	}								\
 	(lock)->break_lock = 0;						\
 	return flags;							\
_

Patches currently in -mm which might be from schwidefsky@xxxxxxxxxx are

origin.patch
git-s390.patch
reduce-max_nr_zones-remove-display-of-counters-for-unconfigured-zones-s390-fix.patch
reduce-max_nr_zones-remove-display-of-counters-for-unconfigured-zones-s390-fix-fix.patch
out-of-memory-notifier.patch
out-of-memory-notifier-tidy.patch
bootmem-use-max_dma_address-instead-of-low32limit.patch
own-header-file-for-struct-page.patch
convert-s390-page-handling-macros-to-functions.patch
convert-s390-page-handling-macros-to-functions-fix.patch
s390-fix-cmm-kernel-thread-handling.patch
make-touch_nmi_watchdog-imply-touch_softlockup_watchdog-on-fix.patch
simplify-update_times-avoid-jiffies-jiffies_64-aliasing-problem-2.patch
directed-yield-cpu_relax-variants-for-spinlocks-and-rw-locks.patch
directed-yield-direct-yield-of-spinlocks-for-powerpc.patch
directed-yield-direct-yield-of-spinlocks-for-s390.patch
kill-wall_jiffies.patch
generic-ioremap_page_range-implementation.patch
generic-ioremap_page_range-flush_cache_vmap.patch
generic-ioremap_page_range-s390-conversion.patch
s390-update-fs3270-to-use-a-struct-pid.patch
add-regs_return_value-helper.patch
introduce-kernel_execve.patch
rename-the-provided-execve-functions-to-kernel_execve.patch
provide-kernel_execve-on-all-architectures.patch
provide-kernel_execve-on-all-architectures-fix.patch
remove-the-use-of-_syscallx-macros-in-uml.patch
sh64-remove-the-use-of-kernel-syscalls.patch
remove-remaining-errno-and-__kernel_syscalls__-references.patch

