Johannes Berg wrote:

> It seems that should be a rate control decision? Possibly taking into
> account more than just always doing aggregation sessions. Then again, I
> suppose aggregation sessions are cheap. What about latency here?

Well, that is what Luis seems to think too, but our rate control doesn't
do much right now, so we try to set up an aggregation session with any
associated STA.

> "maintain minimum HW queue depth"? In what way? You mean put in enough
> frames?

I was talking about this check:

	(txq->axq_depth < ATH_AGGR_MIN_QDEPTH)

in ath_tx_sched_aggr()@xmit.c.

> > On TX Completion,
> >
> > 	* Process all TX queues.
> > 	* Process all completed descriptors.
> > 	* Complete all sub-frames of an aggregate that were ACKed
> > 	  (send status to mac80211).
> > 	* Re-queue sub-frames that were not ACKed back to the TID's
> > 	  pending queue.
> > 	* Schedule this TID for processing.
>
> Those have to go in front of the queue, right? So they're sent out next?

Yep, they are spliced back to the beginning of the queue.

> > 	* Run through all scheduled TIDs.
> > 	* Form aggregates from the pending buffers and send them out.
> > 	  (Again, maintain minimum HW depth.)
>
> Which TIDs are "scheduled"?

All of the TIDs that have pending buffers.

> > So, aggregation is currently done on a need-to basis, and changing
> > this to a flow where mac80211 sends down frames with A-MPDU related
> > control information would mean a complete rewrite of ath9k's TX
> > path. :-)
>
> So what? :) I'm trying to avoid having to do all this again and again
> in b43, rt2x00 etc. The hw really behaves very similarly.

Agreed.

> > Well, I really don't know how this would affect performance, but
> > I think this _might_ be a better model.
>
> Where would you see it have a noticeable effect on performance?

How would mac80211 buffer frames? Would it wait for enough sub-frames to
fill an A-MPDU? When does it decide that buffered frames have to be
pushed down to the driver?

Sujith
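
A minimal C sketch of the flow discussed above, assuming simplified data
structures: a plain singly linked list stands in for the kernel's list
handling, the ATH_AGGR_MIN_QDEPTH value is illustrative, and the
*_sketch function names are hypothetical rather than the actual ath9k
entry points. Only the field names axq_depth, buf_q and the
ATH_AGGR_MIN_QDEPTH macro mirror ath9k itself.

#include <stdbool.h>
#include <stddef.h>

#define ATH_AGGR_MIN_QDEPTH 2	/* illustrative value */

struct ath_buf {
	struct ath_buf *next;
	bool acked;
};

struct ath_atx_tid {
	struct ath_buf *buf_q;	/* pending sub-frames, in order */
	bool sched;		/* TID is on the schedule list */
};

struct ath_txq {
	int axq_depth;		/* aggregates queued to the hardware */
};

/*
 * On TX completion: report ACKed sub-frames, splice un-ACKed ones back
 * to the *head* of the TID's pending queue (so they go out next), and
 * mark the TID as scheduled.
 */
void ath_tx_complete_aggr_sketch(struct ath_txq *txq,
				 struct ath_atx_tid *tid,
				 struct ath_buf *bf_list)
{
	struct ath_buf *bf = bf_list, *next;
	struct ath_buf *head = NULL, *tail = NULL;

	txq->axq_depth--;	/* one aggregate left the hardware */

	while (bf) {
		next = bf->next;
		if (bf->acked) {
			/* send TX status to mac80211 here (elided) */
		} else {
			/* collect for retransmission, preserving order */
			bf->next = NULL;
			if (tail)
				tail->next = bf;
			else
				head = bf;
			tail = bf;
		}
		bf = next;
	}

	if (head) {
		/* splice back to the beginning of the pending queue */
		tail->next = tid->buf_q;
		tid->buf_q = head;
		tid->sched = true;
	}
}

/*
 * Scheduler pass: form aggregates from the pending buffers while the
 * hardware queue is below the minimum depth.
 */
void ath_tx_sched_aggr_sketch(struct ath_txq *txq, struct ath_atx_tid *tid)
{
	while (tid->buf_q && txq->axq_depth < ATH_AGGR_MIN_QDEPTH) {
		/* form one A-MPDU from the head of tid->buf_q and hand
		 * it to the hardware (aggregate formation elided; one
		 * buffer stands in for one aggregate here) */
		tid->buf_q = tid->buf_q->next;
		txq->axq_depth++;
	}
	tid->sched = (tid->buf_q != NULL);
}

Splicing the un-ACKed sub-frames to the head rather than the tail keeps
them in sequence-number order, which matters for the receiver's BlockAck
reorder window.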