Re: [PATCH] mac80211: Fix deadlock in ieee80211_do_stop.

Hello, Ben.

On 12/09/2010 11:23 PM, Ben Greear wrote:
> I saw a brief hang today, and did a sysrq-t, and then saw the timer
> printout you added here.  But, I think that was caused by sysrq-t.
> The system recovered and ran fine.

How brief is brief?  Can you please turn on printk timestamps?  At
115200 baud, the dump itself would have taken ~25 seconds, so yes, the
stall was mostly caused by the sysrq-t dump.  In the dump, iface_work
is in R state at the same position, which looks like the ifmgd->mtx
acquisition.  Can you please confirm that with gdb?  This would only
happen if the lock is highly contended.  Would that be the case,
Johannes?
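
(Rough arithmetic, assuming a serial console at 8N1, i.e. ~10 bits per
character, and a sysrq-t dump on the order of a few hundred KB:
115200 baud / 10 ≈ 11.5 KB/s, and ~290 KB / 11.5 KB/s ≈ 25 seconds.)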

> The second time (after several hours of rebooting), the hang was worse
> and the system ran OOM after maybe 30 seconds.  I did a sysrq-t then.
> 
> I see quite a few printouts from your debug message, but all of them
> after things start going OOM, and after sysrq-t.
> 
> Here's the console capture:
> 
> http://www.candelatech.com/~greearb/minicom_ath9k_log4.txt
> 
> Let me know if you need more traces like this if I hit it again.

I don't know the code very well but it looks very suspicious.  A task
ends up trying to flush a work item which can run for an extended
period of time, during which memory is aggressively consumed for
buffering (it looks like skb's are piling up without any limit), which
is likely to slow everything else down further.  This sounds like an
extremely fragile mechanism to me.  When the work is constantly being
rescheduled, cancel ends up waiting one fewer time than flush: if the
work is both running and pending, flush waits for the pending instance
to finish too, while cancel kills the pending instance and waits only
for the one currently running.  I think that difference could be
acting as the threshold between going bonkers and staying alive.
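
To make the difference concrete, below is a minimal self-contained
sketch of that pattern (all demo_* names are made up and the bodies
are simplified; this is not the mac80211 code): a work item that
drains an unbounded skb queue while the producer keeps refilling the
queue and rescheduling the work.

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/skbuff.h>

static struct sk_buff_head demo_queue;	/* unbounded backlog */
static struct work_struct demo_work;

static void demo_work_fn(struct work_struct *work)
{
	struct sk_buff *skb;

	/* runs until the queue is empty; if the producer keeps adding
	 * skbs, a single invocation can run for a very long time */
	while ((skb = skb_dequeue(&demo_queue)) != NULL)
		kfree_skb(skb);
}

/* producer path: queue an skb and (re)schedule the work */
static void demo_rx(struct sk_buff *skb)
{
	skb_queue_tail(&demo_queue, skb);
	schedule_work(&demo_work);
}

static void demo_stop(bool use_flush)
{
	if (use_flush)
		/* waits for the pending instance as well, so a busy
		 * producer can keep this blocked for a long time */
		flush_work(&demo_work);
	else
		/* kills the pending instance and waits only for the
		 * one already executing */
		cancel_work_sync(&demo_work);

	skb_queue_purge(&demo_queue);
}

static int __init demo_init(void)
{
	struct sk_buff *skb;

	skb_queue_head_init(&demo_queue);
	INIT_WORK(&demo_work, demo_work_fn);

	/* push one skb through so the work runs at least once */
	skb = alloc_skb(128, GFP_KERNEL);
	if (skb)
		demo_rx(skb);
	return 0;
}

static void __exit demo_exit(void)
{
	demo_stop(false);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");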

Can you please test whether the following patch makes any difference?
If flush_work() itself is misbehaving, the patch won't fix anything,
but if this livelock is indeed caused by iface_work running for too
long, the problem should go away.

One way or the other, Johannes, please consider fixing the behavior
here.  It's way too fragile.

Thanks.

diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
index 7aa8559..86bdfdd 100644
--- a/net/mac80211/iface.c
+++ b/net/mac80211/iface.c
@@ -723,6 +723,7 @@ static void ieee80211_iface_work(struct work_struct *work)
 	struct sk_buff *skb;
 	struct sta_info *sta;
 	struct ieee80211_ra_tid *ra_tid;
+	unsigned int cnt = 0;

 	if (!ieee80211_sdata_running(sdata))
 		return;
@@ -825,6 +826,12 @@ static void ieee80211_iface_work(struct work_struct *work)
 		}

 		kfree_skb(skb);
+
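+		/* don't hog the worker: after 100 skbs, requeue and bail out */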
+		if (++cnt > 100) {
+			ieee80211_queue_work(&local->hw, work);
+			break;
+		}
 	}

 	/* then other type-dependent work */
--

