Each per-bucket lock covers a configurable number of buckets. While
shrinking, two buckets in the old table contain the entries for a single
bucket in the new table. Both old buckets need to be held locked while
linking, but they may be protected by the same lock. Check whether the
locks differ and only take the nested lock if they do, to avoid locking
the same spinlock recursively.

Fixes: 97defe1e ("rhashtable: Per bucket locks & deferred expansion/shrinking")
Reported-by: Fengguang Wu <fengguang.wu@xxxxxxxxx>
Signed-off-by: Thomas Graf <tgraf@xxxxxxx>
---
 lib/rhashtable.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 8023b55..45477f7 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -443,8 +443,16 @@ int rhashtable_shrink(struct rhashtable *ht)
 		new_bucket_lock = bucket_lock(new_tbl, new_hash);
 
 		spin_lock_bh(old_bucket_lock1);
-		spin_lock_bh_nested(old_bucket_lock2, RHT_LOCK_NESTED);
-		spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED2);
+
+		/* Depending on the mapping of buckets to locks, the buckets
+		 * in the lower and upper region may map to the same lock.
+		 */
+		if (old_bucket_lock1 != old_bucket_lock2) {
+			spin_lock_bh_nested(old_bucket_lock2, RHT_LOCK_NESTED);
+			spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED2);
+		} else {
+			spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED);
+		}
 
 		rcu_assign_pointer(*bucket_tail(new_tbl, new_hash),
 				   tbl->buckets[new_hash]);
@@ -452,7 +460,8 @@ int rhashtable_shrink(struct rhashtable *ht)
 				   tbl->buckets[new_hash + new_tbl->size]);
 
 		spin_unlock_bh(new_bucket_lock);
-		spin_unlock_bh(old_bucket_lock2);
+		if (old_bucket_lock1 != old_bucket_lock2)
+			spin_unlock_bh(old_bucket_lock2);
 		spin_unlock_bh(old_bucket_lock1);
 	}
 
-- 
1.9.3
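
P.S. for reviewers: a minimal userspace sketch of why the two old
buckets can alias to one lock. bucket_lock() in lib/rhashtable.c indexes
the lock array with hash & locks_mask, so whenever the number of bucket
locks is no larger than the new table size, old buckets new_hash and
new_hash + new_tbl->size land on the same spinlock. The sizes below are
made up for illustration and lock_index() is a hypothetical stand-in,
not the kernel function:

#include <stdio.h>

/* Hypothetical stand-in for bucket_lock(): returns a lock array index
 * instead of a spinlock pointer. nr_locks must be a power of two,
 * mirroring the hash & locks_mask mapping. */
static unsigned int lock_index(unsigned int hash, unsigned int nr_locks)
{
	return hash & (nr_locks - 1);
}

int main(void)
{
	const unsigned int new_size = 4;		/* table size after the shrink (assumed) */
	const unsigned int nr_locks[] = { 2, 8 };	/* fewer resp. more locks than buckets */
	unsigned int i, new_hash;

	for (i = 0; i < 2; i++) {
		printf("nr_locks = %u:\n", nr_locks[i]);
		for (new_hash = 0; new_hash < new_size; new_hash++) {
			unsigned int l1 = lock_index(new_hash, nr_locks[i]);
			unsigned int l2 = lock_index(new_hash + new_size,
						     nr_locks[i]);

			printf("  old buckets %u/%u -> locks %u/%u%s\n",
			       new_hash, new_hash + new_size, l1, l2,
			       l1 == l2 ? " (same lock)" : "");
		}
	}
	return 0;
}

With nr_locks = 2 every pair aliases, so the old unconditional
spin_lock_bh_nested() on old_bucket_lock2 would recurse on a spinlock
already held; with nr_locks = 8 the pairs are disjoint and taking both
nested locks is safe.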