On Mon, Aug 29, 2011 at 2:02 PM, Luis R. Rodriguez <mcgrof@xxxxxxxxx> wrote:
> Hope this helps sum up the issue for 802.11 and what we are faced with.

I should elaborate a bit more here to make sure people understand that the
"bufferbloat" issue assumes simply not retrying frames is a good thing. This
is incorrect. TCP's congestion control algorithm is designed to deal with
network conditions, not the dynamic PHY conditions. The dynamic PHY
conditions are handled through a slew of different means:

  * Rate control
  * Adaptive Noise Immunity (ANI)

Rate control is addressed either in firmware or by the driver. Typically rate
control algorithms use some sort of metric to make a best guess at the rate a
frame should be transmitted at. Minstrel was the first to say -- ahhh, the
hell with it, I give up: just do trial and error, keep using the most
reliable rate, and keep probing the other rates as you go. You fixate on the
best one by using an EWMA. (A toy sketch of that loop is appended below my
signature.)

What I was arguing earlier was that perhaps the same approach can be taken
for the latency issues, under the assumption that the knobs to turn are the
queue size and the number of software retries. In fact the same principle
might be applicable to the aggregation segment size as well. (The second
sketch below shows what I mean.)

Now, ANI is specific to the hardware and adjusts it based on some known
metrics. Today we have fixed thresholds for these, but I wouldn't be
surprised if taking minstrel-like guesses -- trial and error with EWMA-based
fixation -- would help here as well.

  Luis
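To make the minstrel idea concrete, below is a toy, userspace-only sketch of
that kind of sample-and-EWMA loop. To be clear, this is not the actual
minstrel code in mac80211; the rate table, the 75% EWMA weight, the 10%
sampling share and the fake channel in main() are all made up for
illustration.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_RATES      8     /* pretend: legacy OFDM rates 6..54 Mbps */
#define EWMA_WEIGHT    75    /* keep 75% of history on each update */
#define SAMPLE_PERCENT 10    /* spend ~10% of frames probing other rates */

struct rate_stats {
	unsigned int prob_ewma;   /* delivery probability, scaled 0..100 */
	unsigned int attempts;
	unsigned int successes;
};

static struct rate_stats rates[NUM_RATES];
static const unsigned int rate_kbps[NUM_RATES] = {
	6000, 9000, 12000, 18000, 24000, 36000, 48000, 54000
};

/* new = old * weight + sample * (1 - weight), in percent arithmetic */
static unsigned int ewma(unsigned int old, unsigned int sample)
{
	return (old * EWMA_WEIGHT + sample * (100 - EWMA_WEIGHT)) / 100;
}

/* At the end of a stats interval, fold the interval's results into the
 * long-running estimate and reset the counters. */
static void update_stats(void)
{
	for (int i = 0; i < NUM_RATES; i++) {
		struct rate_stats *rs = &rates[i];

		if (!rs->attempts)
			continue;
		rs->prob_ewma = ewma(rs->prob_ewma,
				     rs->successes * 100 / rs->attempts);
		rs->attempts = rs->successes = 0;
	}
}

/* Expected throughput proxy: delivery probability times raw bitrate. */
static int best_rate(void)
{
	int best = 0;

	for (int i = 1; i < NUM_RATES; i++)
		if (rates[i].prob_ewma * rate_kbps[i] >
		    rates[best].prob_ewma * rate_kbps[best])
			best = i;
	return best;
}

/* Mostly transmit at the best-known rate, occasionally probe another. */
static int pick_rate(void)
{
	if (rand() % 100 < SAMPLE_PERCENT)
		return rand() % NUM_RATES;
	return best_rate();
}

int main(void)
{
	/* Fake channel where higher rates fail more often. */
	for (int frame = 0; frame < 1000; frame++) {
		int r = pick_rate();
		bool acked = (rand() % 100) < (95 - r * 10);

		rates[r].attempts++;
		rates[r].successes += acked;
		if (frame % 100 == 99)
			update_stats();
	}
	printf("settled on %u kbps\n", rate_kbps[best_rate()]);
	return 0;
}

The point is just the shape of the loop: collect tx status per rate, smooth
it with an EWMA, mostly use whatever currently scores best, and never stop
spending a small fraction of frames on probing the alternatives.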
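And here is the same loop pointed at the software retry limit instead of the
tx rate, which is the sort of thing I mean for the latency side. Again this
is just my own toy sketch: the candidate retry limits, the 10% loss budget
and the fake link model in main() are invented, and a real version would have
to feed in queueing delay and tx status measured in the driver.

#include <stdio.h>
#include <stdlib.h>

#define NUM_SETTINGS   4
#define EWMA_WEIGHT    75
#define SAMPLE_PERCENT 10
#define LOSS_BUDGET    10    /* made-up ceiling: tolerate up to 10% loss */

static const unsigned int retry_limit[NUM_SETTINGS] = { 2, 4, 7, 10 };
static unsigned int latency_ewma_us[NUM_SETTINGS];  /* queueing delay seen */
static unsigned int loss_ewma[NUM_SETTINGS];        /* percent frames lost */

static unsigned int ewma(unsigned int old, unsigned int sample)
{
	return (old * EWMA_WEIGHT + sample * (100 - EWMA_WEIGHT)) / 100;
}

/* Fold in what we measured while a given setting was active. */
static void record_interval(int idx, unsigned int avg_latency_us,
			    unsigned int loss_percent)
{
	latency_ewma_us[idx] = ewma(latency_ewma_us[idx], avg_latency_us);
	loss_ewma[idx] = ewma(loss_ewma[idx], loss_percent);
}

/* Lowest latency among the settings that keep loss inside the budget. */
static int best_setting(void)
{
	int best = NUM_SETTINGS - 1;   /* most retries as the safe default */

	for (int i = 0; i < NUM_SETTINGS; i++)
		if (loss_ewma[i] < LOSS_BUDGET &&
		    latency_ewma_us[i] < latency_ewma_us[best])
			best = i;
	return best;
}

/* Mostly run with the current best, occasionally probe another setting. */
static int pick_setting(void)
{
	if (rand() % 100 < SAMPLE_PERCENT)
		return rand() % NUM_SETTINGS;
	return best_setting();
}

int main(void)
{
	/* Fake link: more retries cut the loss but add queueing delay. */
	for (int interval = 0; interval < 200; interval++) {
		int s = pick_setting();

		record_interval(s, 2000 + retry_limit[s] * 1500,
				60 / (retry_limit[s] * retry_limit[s]));
	}
	printf("settled on a software retry limit of %u\n",
	       retry_limit[best_setting()]);
	return 0;
}

The exact same scaffolding could drive the queue length or the aggregation
segment size instead of (or alongside) the retry limit, and in principle the
ANI thresholds too -- just swap in the right knob and the right cost metric.
One nice side effect of starting the estimates at zero is that every
candidate setting gets tried at least once before the loop settles.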