Hi Andi,

[...]

> +	spin_lock_irqsave(&priv_tmp->wmm.ra_list_spinlock, flags);
> +	BUG_ON(atomic_read(&priv_tmp->wmm.tx_pkts_queued));
> +	spin_unlock_irqrestore(&priv_tmp->wmm.ra_list_spinlock, flags);
> +
>  	/* No packet at any TID for this priv. Mark as such
>  	 * to skip checking TIDs for this priv (until pkt is
>  	 * added).
>  	 */
>  	atomic_set(hqp, NO_PKT_PRIO_TID);
>
> Which crashed. Hence searching for queued packets and adding new ones is
> not synchronized; new packets can be added while searching the WMM
> queues. If a packet is added right before setting max prio to NO_PKT,
> that packet is trapped and creates an infinite loop.
>
> Because of the new packet, tx_pkts_queued is at least 1, indicating the
> WMM lists are not empty. Opposing that, max prio is NO_PKT, which means
> "skip this WMM queue, it has no packets".
> The infinite loop results because the main loop checks the WMM lists
> for not being empty (tx_pkts_queued != 0), but then finds no packet,
> since it skips the WMM queue the packet is located on. This will never
> end unless a new packet is added, which will restore max prio.

Thanks for your analysis.

> One possible solution is to rely solely on tx_pkts_queued for checking
> whether a WMM queue is empty, and drop the NO_PKT define.

FYI, Yogesh suggested another fix (attached).

[...]

> This seems to be introduced with this patch:
> 17e8cec 05-16-2011 mwifiex: CPU mips optimization with NO_PKT_PRIO_TID
>
> I was wondering why this hasn't happened more frequently. Possibly, if
> the interface is working in bridge mode, new packets might be added to
> the WMM queue holding the trapped packet. My 2c.
>
> I prepared a few patches fixing the above bug as suggested, plus some
> cleanup patches I did while trying to get an understanding. Please
> review.

Thanks for the patches. We will review them and run some WMM tests.

Thanks,
Bing
Attachment: hqp.diff