On Tue, Feb 15, 2011 at 7:39 PM, Johannes Berg <johannes@xxxxxxxxxxxxxxxx> wrote:
> On Tue, 2011-02-15 at 19:34 +0530, Vivek Natarajan wrote:
>> On Tue, Feb 15, 2011 at 6:14 PM, Johannes Berg
>> <johannes@xxxxxxxxxxxxxxxx> wrote:
>> > On Tue, 2011-02-08 at 15:43 +0530, Vivek Natarajan wrote:
>> >
>> >> > Maybe the subif queues should be stopped, then flush, then tx nullfunc,
>> >> > then stop all queues to configure the HW or something like that?
>> >>
>> >> I tried this sequence: stop the subif queues, then flush, then tx the
>> >> nullfunc, and wake the subif queues. (We cannot keep the queues stopped
>> >> until we receive the tx_status, because the nullfunc might have failed in
>> >> the tx path itself, in which case mac80211 will never receive a tx_status.)
>> >
>> > I've recently been thinking about this -- I'm thinking that maybe we
>> > should change this behaviour. Right now the tx() routine basically
>> > always returns OK (except in at76), and I suppose it could instead return
>> > whether the frame was queued up successfully...
>> >
>> >> After some time interval, once the ack for the nullfunc is received, stop
>> >> the queues again, configure the hw and then wake up the queues. During
>> >> that interval there is a race where a frame can be queued to the hw. I
>> >> have tested this and the issue is quickly reproducible.
>> >
>> > Right. The code that sends the nullfunc -- I think it can probably
>> > sleep? If so, instead of starting the queue for the TX it could just
>> > flush TX again after sending the nullfunc -- after that either it has
>> > status for the frame, or the frame was dropped, no?
>>
>> So this requires changing all the drivers to return a status from the
>> tx() routine.
>
> Not necessarily. I think that if you do the flush() based approach, then
> mac80211 can infer that the frame was dropped by not getting a TX status
> during flush, right? Assuming there's reliable TX status to start with.

Thanks. I will give this a try.

Vivek.
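
P.S. For the archives, here is a rough, untested sketch of the flush-based
sequence being proposed, written against mac80211's internal helpers as I
read them (drv_flush(), ieee80211_send_nullfunc() and the
stop/wake-queues-by-reason helpers); the exact hook point, and whether to
stop only the sdata's queues or all hw queues, are still open questions:

#include "ieee80211_i.h"
#include "driver-ops.h"

/* Illustrative only: stop queues, flush, send nullfunc, flush again. */
static void ieee80211_ps_nullfunc_flush(struct ieee80211_local *local,
					struct ieee80211_sub_if_data *sdata)
{
	/* 1) Stop the queues so no new frame races into the driver. */
	ieee80211_stop_queues_by_reason(&local->hw,
					IEEE80211_QUEUE_STOP_REASON_PS);

	/* 2) Flush whatever is already queued up in the hardware. */
	drv_flush(local, false);

	/* 3) Send the nullfunc announcing powersave to the AP. */
	ieee80211_send_nullfunc(local, sdata, 1);

	/*
	 * 4) Flush again instead of waking the queues while waiting for
	 *    the ack: once this returns, the nullfunc has either completed
	 *    (and its TX status is in) or it was dropped, so mac80211 can
	 *    infer failure without changing every driver's tx() return.
	 */
	drv_flush(local, false);

	/* 5) Safe to configure the hw here, then wake the queues again. */
	ieee80211_wake_queues_by_reason(&local->hw,
					IEEE80211_QUEUE_STOP_REASON_PS);
}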