On 8.7.2021 11.43, Ikjoon Jang wrote:
> When unlinked urbs are queued to the cancelled td list, many tds
> might be located after hw dequeue pointer and just marked as no-op
> but not reclaimed to num_trbs_free. This bias can leads to unnecessary
> ring expansions and leaks in atomic pool.

Good point, in that case TRBs turned into no-ops never get added to the
free TRB count.

>
> To prevent this bias, this patch counts free TRBs every time xhci moves
> dequeue pointer. This patch utilizes existing
> update_ring_for_set_deq_completion() function, renamed it to move_deq().
>
> When it walks through to the new dequeue pointer, it also counts
> free TRBs manually. This patch adds a fast path for the most cases
> where the new dequeue pointer is still in the current segment.
>

This looks like an option.

Another approach would be to keep the normal case fast, and the special
case code simple. Something like:

finish_td()
	...
	/* Update ring dequeue pointer */
	if (ep_ring->dequeue == td->first_trb) {
		/* Fast path: jump straight to the end of the TD */
		ep_ring->dequeue = td->last_trb;
		ep_ring->deq_seg = td->last_trb_seg;
		ep_ring->num_trbs_free += td->num_trbs - 1;
		inc_deq(xhci, ep_ring);
	} else {
		move_deq(...);
	}

move_deq(...)
{
	/* Slow path: walk one TRB at a time up to the new dequeue */
	while (ring->dequeue != new_dequeue)
		inc_deq(xhci, ring);
	inc_deq(xhci, ring);
}

inc_deq() increases the num_trbs_free count.

I haven't looked at the details of this yet, but I'm away for the next
two weeks, so I wanted to share this first anyway.

-Mathias
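
A minimal standalone sketch of the idea discussed above: every advance of
the dequeue pointer reclaims one TRB into num_trbs_free, so TRBs that were
turned into no-ops still get counted back. This assumes a simplified
single-segment ring; the names (sketch_ring, sketch_inc_deq,
sketch_move_deq) are made up for illustration and are not the real xhci
driver code.

#include <assert.h>
#include <stdio.h>

#define RING_SIZE 16

struct sketch_ring {
	int trbs[RING_SIZE];	/* stand-in for the ring's TRB entries */
	int enqueue;		/* producer index */
	int dequeue;		/* consumer index */
	int num_trbs_free;	/* the count the patch wants kept accurate */
};

/* Advance the dequeue pointer by one TRB and reclaim it. */
static void sketch_inc_deq(struct sketch_ring *ring)
{
	ring->dequeue = (ring->dequeue + 1) % RING_SIZE;
	ring->num_trbs_free++;
}

/* Walk one TRB at a time until the new dequeue index is reached,
 * counting each reclaimed TRB on the way.
 */
static void sketch_move_deq(struct sketch_ring *ring, int new_dequeue)
{
	while (ring->dequeue != new_dequeue)
		sketch_inc_deq(ring);
}

int main(void)
{
	struct sketch_ring ring = {
		.enqueue = 10,				/* 10 TRBs queued */
		.dequeue = 0,
		.num_trbs_free = RING_SIZE - 10,
	};

	/*
	 * Pretend a cancelled TD occupied TRBs 0..3 and was turned into
	 * no-ops: moving the dequeue pointer past it must give those
	 * 4 TRBs back to num_trbs_free.
	 */
	sketch_move_deq(&ring, 4);
	assert(ring.num_trbs_free == RING_SIZE - 10 + 4);

	printf("dequeue=%d num_trbs_free=%d\n",
	       ring.dequeue, ring.num_trbs_free);
	return 0;
}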