This patch fixes the following RCU debug splat:

===============================
[ INFO: suspicious RCU usage. ]
3.9.0-rc8-wl+ #31 Tainted: G           O
-------------------------------
net/mac80211/rate.c:691 suspicious rcu_dereference_check() usage!

other info that might help us debug this:

rcu_scheduler_active = 1, debug_locks = 1
3 locks held by hostapd/9451:
 #0:  (genl_mutex){+.+.+.}, at: [<c1326365>] genl_lock+0xf/0x11
 #1:  (rtnl_mutex){+.+.+.}, at: [<c13133c4>] rtnl_lock+0xf/0x11
 #2:  (&rdev->mtx){+.+.+.}, at: [<f853395e>] nl80211_pre_doit+0x166/0x180 [cfg80211]

stack backtrace:
Pid: 9451, comm: hostapd Tainted: G           O 3.9.0-rc8-wl+ #31
Call Trace:
 [<c107da0b>] lockdep_rcu_suspicious+0xe6/0xee
 [<f8bf82ad>] rate_control_set_rates+0x43/0x5a [mac80211]
 [<f8c2cacb>] minstrel_update_rates+0xdc/0xe2 [mac80211]
 [<f8c2cfb0>] minstrel_rate_init+0x24c/0x33d [mac80211]
 [<f8c2d9d3>] minstrel_ht_update_caps+0x206/0x234 [mac80211]
 [<c1080a8d>] ? lock_release+0x1c9/0x226
 [<f8c2da25>] minstrel_ht_rate_init+0x10/0x14 [mac80211]
 [...]

Signed-off-by: Christian Lamparter <chunkeey@xxxxxxxxxxxxxx>
---
Actually, rcu_read_lock() might not be necessary in this special case
[the RC is not yet initialized, so nothing bad can happen]. But since
rcu_read_lock() has a low overhead, and the rate_control_set_rates()
documentation in mac80211.h does not say anything about locking, I
think this is a viable way.
---
diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
index 0d51877..615d3a8 100644
--- a/net/mac80211/rate.c
+++ b/net/mac80211/rate.c
@@ -688,11 +688,15 @@ int rate_control_set_rates(struct ieee80211_hw *hw,
 			   struct ieee80211_sta *pubsta,
 			   struct ieee80211_sta_rates *rates)
 {
-	struct ieee80211_sta_rates *old = rcu_dereference(pubsta->rates);
+	struct ieee80211_sta_rates *old;
+
+	rcu_read_lock();
+	old = rcu_dereference(pubsta->rates);
 
 	rcu_assign_pointer(pubsta->rates, rates);
 	if (old)
 		kfree_rcu(old, rcu_head);
+	rcu_read_unlock();
 
 	return 0;
 }