On Fri, Jan 07, 2011 at 06:16:04AM +0530, greearb@xxxxxxxxxxxxxxx wrote:
> From: Ben Greear <greearb@xxxxxxxxxxxxxxx>
>
> We should not get to this state, but we do.  What is
> worse, many times the xmit logic still will not start,
> probably due to tids being paused when they shouldn't be.
>
> Signed-off-by: Ben Greear <greearb@xxxxxxxxxxxxxxx>
> ---
>
> NOTE:  This needs review.  It might be too much of a hack
> for upstream code, and at best it works around a small part
> of the problem.
>
> :100644 100644 3aae523... 547fb44... M	drivers/net/wireless/ath/ath9k/xmit.c
>  drivers/net/wireless/ath/ath9k/xmit.c |   21 +++++++++++++++++++++
>  1 files changed, 21 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
> index 3aae523..547fb44 100644
> --- a/drivers/net/wireless/ath/ath9k/xmit.c
> +++ b/drivers/net/wireless/ath/ath9k/xmit.c
> @@ -2110,6 +2110,27 @@ static void ath_tx_complete_poll_work(struct work_struct *work)
>  			} else {
>  				txq->axq_tx_inprogress = true;
>  			}
> +		} else {
> +			/* If the queue has pending buffers, then it
> +			 * should be doing tx work (and have axq_depth).
> +			 * Shouldn't get to this state I think..but
> +			 * perhaps we do.
> +			 */
> +			if (!list_empty(&txq->axq_acq)) {
> +				ath_err(ath9k_hw_common(sc->sc_ah),
> +					"txq: %p axq_qnum: %i,"
> +					" axq_link: %p"
> +					" pending frames: %i"
> +					" axq_acq is not empty, but"
> +					" axq_depth is zero.  Calling"
> +					" ath_txq_schedule to restart"
> +					" tx logic.\n",
> +					txq, txq->axq_qnum,
> +					txq->axq_link,
> +					txq->pending_frames);
> +				ATH_DBG_WARN_ON_ONCE(1);
> +				ath_txq_schedule(sc, txq);

NAK.

This completion work monitors the hw queue periodically and does a
reset if a hang is detected. It is in no way meant to schedule
aggregation, so this change really does not make sense: scheduling a
tid periodically from this work would introduce reordering issues,
especially when there are more retries.

Vasanth
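
For readers following the thread, the watchdog pattern Vasanth is
describing can be modeled in a few lines: the poll work only *detects*
a stuck hardware queue (non-zero depth with no completion between two
consecutive polls) and requests a chip reset; it never schedules tx
work itself. The sketch below is a user-space approximation of that
two-poll check, assuming that semantics; the model_* names and the
main() harness are invented for illustration and are not driver code.

/* Minimal model of the two-poll hang detection in a tx-completion
 * watchdog. All names here are illustrative assumptions, not the
 * actual ath9k code.
 */
#include <stdbool.h>
#include <stdio.h>

struct model_txq {
	int  axq_depth;          /* frames queued to the hardware */
	bool axq_tx_inprogress;  /* armed on one poll, cleared by completions */
};

/* Called whenever the hardware completes a frame: progress clears the flag. */
static void model_tx_complete(struct model_txq *txq)
{
	txq->axq_depth--;
	txq->axq_tx_inprogress = false;
}

/* One watchdog tick: returns true if the queue looks hung, i.e. it
 * still has depth on two consecutive polls with no completion in
 * between (the armed flag survived a full poll interval).
 */
static bool model_poll(struct model_txq *txq)
{
	if (txq->axq_depth) {
		if (txq->axq_tx_inprogress)
			return true;           /* no progress: request a reset */
		txq->axq_tx_inprogress = true; /* arm: re-check next interval */
	}
	return false;
}

int main(void)
{
	struct model_txq txq = { .axq_depth = 2, .axq_tx_inprogress = false };

	printf("poll 1 hung? %d\n", model_poll(&txq)); /* 0: arms the flag */
	model_tx_complete(&txq);                       /* progress made */
	printf("poll 2 hung? %d\n", model_poll(&txq)); /* 0: re-arms */
	printf("poll 3 hung? %d\n", model_poll(&txq)); /* 1: stuck -> reset */
	return 0;
}

Note that nothing in this detect-and-reset loop touches the software
queues, which is the reviewer's point: restarting scheduling from the
watchdog would paper over whatever paused the tids in the first place.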