On June 2, 2016 19:15, Peter Zijlstra wrote:
> On Thu, Jun 02, 2016 at 07:01:17PM +0800, xinhui wrote:
>> On June 2, 2016 18:44, Arnd Bergmann wrote:
>>> On Thursday, June 2, 2016 6:09:08 PM CEST Pan Xinhui wrote:
>>>> diff --git a/include/asm-generic/qrwlock.h b/include/asm-generic/qrwlock.h
>>>> index 54a8e65..eadd7a3 100644
>>>> --- a/include/asm-generic/qrwlock.h
>>>> +++ b/include/asm-generic/qrwlock.h
>>>> @@ -139,7 +139,7 @@ static inline void queued_read_unlock(struct qrwlock *lock)
>>>>   */
>>>>  static inline void queued_write_unlock(struct qrwlock *lock)
>>>>  {
>>>> -	smp_store_release((u8 *)&lock->cnts, 0);
>>>> +	(void)atomic_sub_return_release(_QW_LOCKED, &lock->cnts);
>>>>  }
>>> Isn't this more expensive than the existing version?
>> Yes, a little more expensive than the existing version.
> Think 20+ cycles worse.
>> But as this is generic code, I am not sure how it will impact performance on other archs.
> As always, you get to audit users of stuff you change. And here you're
> lucky, there's only 1.
Yes, and I hope there will be 2 :)
>> If you like, we can calculate the correct byte address and store 0 to it,
>> say:
>>
>> static inline void queued_write_unlock(struct qrwlock *lock)
>> {
>> 	u8 *wl = (u8 *)&lock->cnts;	/* the _QW_LOCKED byte is the low byte of cnts */
>> #ifdef __BIG_ENDIAN
>> 	wl += 3;			/* on big endian the low byte sits at the highest address */
>> #endif
>> 	smp_store_release(wl, 0);
>> }
> No, that's horrible. Either lift __qrwlock into qrwlock_types.h or do
> what qspinlock does. And looking at that, we could make
> queued_spin_unlock() use the atomic_sub_return_release() thing too I
> suppose, that generates slightly better code.

Agree, and thanks for the suggestion. I will give it a try in
queued_spin_unlock().
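
For reference, one possible shape of that change (an untested sketch, assuming
the current asm-generic/qspinlock.h unlock path of smp_mb__before_atomic() +
atomic_sub(), and that _Q_LOCKED_VAL still marks the locked byte; the _release
variant is used here because unlock needs release ordering, matching the
qrwlock hunk above):

#ifndef queued_spin_unlock
/**
 * queued_spin_unlock - release a queued spinlock
 * @lock : Pointer to queued spinlock structure
 */
static __always_inline void queued_spin_unlock(struct qspinlock *lock)
{
	/*
	 * Drop _Q_LOCKED_VAL with release semantics. This replaces the
	 * smp_mb__before_atomic() + atomic_sub() pair and lets archs with
	 * a native release form generate slightly better code.
	 */
	(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
}
#endif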
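
For the write unlock itself, lifting __qrwlock to where queued_write_unlock()
can see it would let the writer byte be addressed by name instead of by
pointer arithmetic. A sketch, assuming the __qrwlock layout currently private
to kernel/locking/qrwlock.c:

struct __qrwlock {
	union {
		atomic_t cnts;
		struct {
#ifdef __LITTLE_ENDIAN
			u8 wmode;	/* Writer mode   */
			u8 rcnts[3];	/* Reader counts */
#else
			u8 rcnts[3];	/* Reader counts */
			u8 wmode;	/* Writer mode   */
#endif
		};
	};
	arch_spinlock_t	lock;
};

static inline void queued_write_unlock(struct qrwlock *lock)
{
	struct __qrwlock *l = (struct __qrwlock *)lock;

	/*
	 * Only the lock holder ever sets wmode, so a plain byte store with
	 * release ordering is enough to drop the write lock, and the
	 * endianness handling stays inside the structure definition.
	 */
	smp_store_release(&l->wmode, 0);
}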