On Wed, 2008-07-23 at 13:14 -0700, David Miller wrote:
> From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Date: Wed, 23 Jul 2008 12:58:16 +0200
>
> > So I guess my question is: is netif_tx_lock() here to stay, or is the
> > right fix to convert all those drivers to use __netif_tx_lock(), which
> > locks only a single queue?
>
> It's staying.
>
> It's trying to block all potential calls into the ->hard_start_xmit()
> method of the driver, and the only reliable way to do that is to take
> all the TX queue locks. And in one form or another, we're going to
> have this "grab/release all the TX queue locks" construct.
>
> I find it interesting that this cannot be simply described to lockdep
> :-)

If you think it's OK to take USHORT_MAX locks at once, I'm afraid we'll
have to agree to disagree :-/

Thing is, lockdep wants to be able to describe the locking hierarchy
with classes, and each class needs to be in static storage for various
reasons. So if you make a locking hierarchy that is USHORT_MAX deep,
you need at least that many static classes.

Also, you'll run into the fact that lockdep will only track something
like 48 held locks; after that it self-terminates. I'm aware of only 2
sites in the kernel that break this limit.

The downside of stretching this limit is that deep lock chains come
with costs (especially so on -rt), so I'm not particularly eager to
grow it -- it might give the impression that it's a good idea to have
very long lock chains.

--
To unsubscribe from this list: send the line "unsubscribe linux-wireless" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html