From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>

We need to make sure that all requeue operations take the hash bucket
locks in the same order to avoid deadlock. Simplify the current
double_lock_hb implementation by making sure hb1 is always the
"smallest" bucket to avoid extra checks.

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Reviewed-by: André Almeida <andrealmeid@xxxxxxxxxxxxx>
[André: Add commit description]
Signed-off-by: André Almeida <andrealmeid@xxxxxxxxxxxxx>
---
 kernel/futex/futex.h | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index fe847f57dad5..465f7bd5caef 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -239,14 +239,12 @@ extern int fixup_pi_owner(u32 __user *uaddr, struct futex_q *q, int locked);
 static inline void
 double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
 {
-	if (hb1 <= hb2) {
-		spin_lock(&hb1->lock);
-		if (hb1 < hb2)
-			spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
-	} else { /* hb1 > hb2 */
-		spin_lock(&hb2->lock);
-		spin_lock_nested(&hb1->lock, SINGLE_DEPTH_NESTING);
-	}
+	if (hb1 > hb2)
+		swap(hb1, hb2);
+
+	spin_lock(&hb1->lock);
+	if (hb1 != hb2)
+		spin_lock_nested(&hb2->lock, SINGLE_DEPTH_NESTING);
 }
 
 static inline void
-- 
2.33.0
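
A minimal user-space sketch of the same ordering trick, for readers who
want to try it outside the kernel: pthread mutexes stand in for the
hash-bucket spinlocks, and every name below (struct bucket, double_lock,
double_unlock) is invented for the illustration, not part of the patch.
The point it demonstrates is that whichever argument order the caller
uses, the locks are always taken lowest-address first, so two concurrent
paths can never hold them in opposite orders.

/*
 * User-space illustration (NOT kernel code): order the two locks by
 * address so every caller acquires them in the same order, which rules
 * out ABBA deadlocks. Mirrors the double_lock_hb() pattern above.
 */
#include <pthread.h>
#include <stdio.h>

struct bucket {
	pthread_mutex_t lock;
};

static void swap_ptrs(struct bucket **a, struct bucket **b)
{
	struct bucket *tmp = *a;
	*a = *b;
	*b = tmp;
}

/* Lock two buckets in address order; take only one lock if they are equal. */
static void double_lock(struct bucket *b1, struct bucket *b2)
{
	if (b1 > b2)
		swap_ptrs(&b1, &b2);

	pthread_mutex_lock(&b1->lock);
	if (b1 != b2)
		pthread_mutex_lock(&b2->lock);
}

static void double_unlock(struct bucket *b1, struct bucket *b2)
{
	pthread_mutex_unlock(&b1->lock);
	if (b1 != b2)
		pthread_mutex_unlock(&b2->lock);
}

int main(void)
{
	struct bucket a = { PTHREAD_MUTEX_INITIALIZER };
	struct bucket b = { PTHREAD_MUTEX_INITIALIZER };

	/* Both call orders acquire the locks in the same (address) order. */
	double_lock(&a, &b);
	double_unlock(&a, &b);

	double_lock(&b, &a);
	double_unlock(&b, &a);

	printf("no deadlock: both call orders used the same lock order\n");
	return 0;
}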