On Wed, May 6, 2015 at 4:15 PM, Johannes Berg <johannes@xxxxxxxxxxxxxxxx> wrote:
> To revive an old thread - sorry...
>
>> >> Fix it by cancelling hw_roc_done on roc cancel.
>> >> Since hw_roc_done takes local->mtx, temporarily
>> >> release it (before re-acquiring it and starting
>> >> the next roc if needed).
>> >
>> > I can't say I like this, and I'm not even sure it doesn't leave a race?
>> > After all, the work will not take the RTNL.
>> >
>> the other way (cancel while hw_roc_done) is protected, since
>> ieee80211_cancel_roc() looks for a specific roc cookie.
>
> But you're saying that we rely on RTNL to not modify the list, and the
> hw_roc_done work doesn't use the RTNL? Hmm. Are you saying now that it
> also checks that it was started? But then why do we even have this
> issue?
>
sorry, i'll have to recall the exact details...

> Or I can see why we have this issue, but perhaps we could fix it by
> flushing the work *after* we cancel, so in case the driver was
> cancelling at the same time we'd not yet have started the next one?
>
that's basically what i did - canceled the work after
drv_cancel_remain_on_channel() was called.

>> > I think the only way to really fix that is to make the driver return the
>> > roc pointer or so and then we can either queue a work struct per roc
>> > struct, or at least mark the roc struct as driver-destroyed and
>> > double-check that in the work function?
>> >
>> that was my initial implementation. but then i thought it will be
>> nicer to implement it in mac80211 and avoid changing all the drivers
>> (with the need to save the cookie, etc.)
>> do you prefer adding such cookie/pointer param?
>
> How difficult was it for the driver(s)? It seems it would make the code
> far easier to reason about, which can only be a good thing. We'd save
> the whole race discussion above :)
>
sure. let's forget about this discussion and go with the clearer solution :)

Eliad.
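
For readers following the thread, here is a rough sketch of the two ideas being debated above: flushing the completion work with the mutex dropped after asking the driver to cancel, and double-checking in the work function that the roc at the head of the list was really started and is not being cancelled. This is not the actual mac80211 code; struct roc_ctx, struct roc_state, hw_roc_done_work(), cancel_current_roc() and the started/cancelling flags are simplified placeholder names introduced only for illustration, and drv_cancel_remain_on_channel() appears only in a comment.

/*
 * Rough sketch only -- simplified placeholder names, not the mac80211 code.
 */
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/workqueue.h>

struct roc_ctx {
	struct list_head list;
	u64 cookie;
	bool started;		/* the driver actually started this roc */
	bool cancelling;	/* we have asked the driver to cancel it */
};

struct roc_state {
	struct mutex mtx;		/* plays the role of local->mtx */
	struct list_head roc_list;	/* pending/active roc requests */
	struct work_struct hw_roc_done;	/* queued from the driver's completion */
};

static void start_next_roc(struct roc_state *st)
{
	lockdep_assert_held(&st->mtx);
	/* pick the next pending roc from st->roc_list and hand it to the driver */
}

/* Runs when the driver reports that the current roc has finished. */
static void hw_roc_done_work(struct work_struct *work)
{
	struct roc_state *st = container_of(work, struct roc_state, hw_roc_done);
	struct roc_ctx *roc;

	mutex_lock(&st->mtx);
	roc = list_first_entry_or_null(&st->roc_list, struct roc_ctx, list);
	/*
	 * Double-check before acting on the completion: only complete a roc
	 * that was really started and that nobody is cancelling right now.
	 * A roc that is being cancelled is finished by the cancel path.
	 */
	if (roc && roc->started && !roc->cancelling) {
		list_del(&roc->list);
		/* notify userspace about the expired roc, free it, ... */
		start_next_roc(st);
	}
	mutex_unlock(&st->mtx);
}

/* Cancel path, called with st->mtx held for the currently active roc. */
static void cancel_current_roc(struct roc_state *st, struct roc_ctx *roc)
{
	lockdep_assert_held(&st->mtx);

	roc->cancelling = true;
	/* drv_cancel_remain_on_channel(...) would be called here */

	/*
	 * The driver may have completed the roc concurrently and queued the
	 * hw_roc_done work.  Flush it with the mutex dropped (the work takes
	 * st->mtx itself), so no stale completion is still in flight by the
	 * time the next roc is started.  Because roc->cancelling is already
	 * set, the work will not complete or free this roc behind our back.
	 */
	mutex_unlock(&st->mtx);
	flush_work(&st->hw_roc_done);
	mutex_lock(&st->mtx);

	list_del(&roc->list);
	/* notify userspace about the cancelled roc, free it, ... */
	start_next_roc(st);
}

The ordering detail the thread turns on is that flush_work() has to run with the mutex released, since the work function itself takes it; setting roc->cancelling before dropping the lock is what keeps the concurrent completion from finishing (or freeing) the roc underneath the cancel path.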