Richard Gobert wrote:
> {inet,ipv6}_gro_receive functions perform flush checks (ttl, flags,
> iph->id, ...) against all packets in a loop. These flush checks are
> currently used only in tcp flows in GRO.
>
> These checks need to be done only once in tcp_gro_receive and only
> against the found p skb, since they only affect flush and not same_flow.
>
> Leveraging the previous commit in the series, in which correct network
> header offsets are saved for both outer and inner network headers -
> allowing these checks to be done only once, in tcp_gro_receive. As a
> result, NAPI_GRO_CB(p)->flush is not used at all. In addition, flush_id
> checks are more declarative and contained in inet_gro_flush, thus
> removing the need for flush_id in napi_gro_cb.
>
> This results in less parsing code for UDP flows and non-loop flush
> tests for TCP flows.
>
> For example, running 40 IP/UDP netperf connections:
> ./super_netperf.sh 40 -H 1.1.1.2 -t UDP_STREAM -l 120
>
> Running perf top for 90s, we can see that relatively less time is spent
> in inet_gro_receive when GRO is not coalescing UDP:
>
> net-next:
>   1.26% [kernel] [k] inet_gro_receive
>
> patch applied:
>   0.85% [kernel] [k] inet_gro_receive
>
> udpgro_bench.sh single connection GRO improvement:
> net-next:
>   0.76% [kernel] [k] inet_gro_receive
>
> patch applied:
>   0.61% [kernel] [k] inet_gro_receive
>
> Signed-off-by: Richard Gobert <richardbgobert@xxxxxxxxx>

In v3 we discussed how a flush on network layer differences (such as TTL
or ToS) currently affects only the TCP GRO path, but should apply more
broadly. We agreed that it is fine to leave that to a separate patch
series. But seeing this patch: it introduces a lot of churn, and it also
makes it harder to address that issue for UDP, as it now moves the
network layer checks directly into the TCP code.