On Wed, 2019-04-17 at 10:17 +0100, Toke Høiland-Jørgensen wrote:
> Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx> writes:
>
> > On Tue, Apr 16, 2019 at 02:18:36PM +0100, Toke Høiland-Jørgensen wrote:
> >
> > > > The congestion control happens at two levels. You are right that the
> > > > socket buffer acts as one limit. However, other applications may also
> > > > rely on the TX queue being full as the throttle (by setting a
> > > > sufficiently large socket buffer size).
> > >
> > > Do you happen to have an example of an application that does this that
> > > could be used for testing? :)
> >
> > Have a look at
> >
> > commit 6ce9e7b5fe3195d1ae6e3a0753d4ddcac5cd699e
> > Author: Eric Dumazet <eric.dumazet@xxxxxxxxx>
> > Date:   Wed Sep 2 18:05:33 2009 -0700
> >
> >     ip: Report qdisc packet drops
> >
> > You should be able to do a UDP flood while setting IP_RECVERR to
> > detect the packet drop due to a full queue which AFAICS will never
> > happen with the current mac80211 setup.
>
> Also, looking at udp.c, it seems it uses net_xmit_errno() - which means
> that returning NET_XMIT_CN has the same effect as NET_XMIT_SUCCESS when
> propagated back to userspace? Which would kinda defeat the point of
> going to the trouble of propagating up the return code (the mac80211
> queue will never drop the most recently enqueued packet)...

I guess there might be value in returning NET_XMIT_CN anyway, but I
think you're right in that we can never return anything but
NET_XMIT_SUCCESS or NET_XMIT_CN, since we never drop this new packet,
just older ones.

Which, btw, is exactly the same with net/sched/sch_fq_codel.c, AFAICT?

johannes