On Thu, Feb 22, 2024 at 10:05 PM Richard Gobert <richardbgobert@xxxxxxxxx> wrote:
>
> Commits a602456 ("udp: Add GRO functions to UDP socket") and 57c67ff ("udp:
> additional GRO support") introduced incorrect usage of {ip,ipv6}_hdr in the
> complete phase of GRO. These functions always return skb->network_header,
> which in the case of encapsulated packets at the GRO complete phase is
> always set to the innermost L3 of the packet. That means that calling
> {ip,ipv6}_hdr for skbs which completed the GRO receive phase (both in
> gro_list and *_gro_complete) when parsing an encapsulated packet's _outer_
> L3/L4 may return an unexpected value.
>
> This incorrect usage leads to a bug in GRO's UDP socket lookup.
> The udp{4,6}_lib_lookup_skb functions use ip_hdr/ipv6_hdr respectively.
> These *_hdr functions return network_header, which will point to the
> innermost L3, resulting in the wrong offset being used in
> __udp{4,6}_lib_lookup with encapsulated packets.
>
> Reproduction example:
>
> Endpoint configuration example (fou + local address bind)
>
> # ip fou add port 6666 ipproto 4
> # ip link add name tun1 type ipip remote 2.2.2.1 local 2.2.2.2 encap fou encap-dport 5555 encap-sport 6666 mode ipip
> # ip link set tun1 up
> # ip a add 1.1.1.2/24 dev tun1
>
> Netperf TCP_STREAM results on net-next before and after the patch is applied:
>
> net-next main, GRO enabled:
> $ netperf -H 1.1.1.2 -t TCP_STREAM -l 5
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
> 131072  16384  16384    5.28        2.37
>
> net-next main, GRO disabled:
> $ netperf -H 1.1.1.2 -t TCP_STREAM -l 5
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
> 131072  16384  16384    5.01     2745.06
>
> patch applied, GRO enabled:
> $ netperf -H 1.1.1.2 -t TCP_STREAM -l 5
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
> 131072  16384  16384    5.01     2877.38
>
> This patch fixes this bug and prevents similar future misuse of
> network_header by setting network_header and inner_network_header to their
> respective values during the receive phase of GRO. This results in
> more coherent {inner_,}network_header values for every skb in gro_list,
> which also means there's no need to set/fix these values before passing
> the packet forward.
>
> network_header is already set in dev_gro_receive, and under encapsulation we
> set inner_network_header. *_gro_complete functions use a new helper
> function - skb_gro_complete_network_header, which returns the
> network_header/inner_network_header offset during the GRO complete phase,
> depending on skb->encapsulation.
>
> Fixes: 57c67ff4bd92 ("udp: additional GRO support")
> Signed-off-by: Richard Gobert <richardbgobert@xxxxxxxxx>
> ---
>  include/net/gro.h        | 14 +++++++++++++-
>  net/8021q/vlan_core.c    |  3 +++
>  net/ipv4/af_inet.c       |  8 ++++----
>  net/ipv4/tcp_offload.c   |  2 +-
>  net/ipv4/udp_offload.c   |  2 +-
>  net/ipv6/ip6_offload.c   | 11 +++++------
>  net/ipv6/tcpv6_offload.c |  2 +-
>  net/ipv6/udp_offload.c   |  2 +-
>  8 files changed, 29 insertions(+), 15 deletions(-)
>
> diff --git a/include/net/gro.h b/include/net/gro.h
> index b435f0ddbf64..89502a7e35ed 100644
> --- a/include/net/gro.h
> +++ b/include/net/gro.h
> @@ -177,10 +177,22 @@ static inline void *skb_gro_header(struct sk_buff *skb,
>  	return ptr;
>  }
>
> +static inline int skb_gro_network_offset(struct sk_buff *skb)
> +{
> +	return NAPI_GRO_CB(skb)->encap_mark ? skb_inner_network_offset(skb) :
> +					      skb_network_offset(skb);
> +}
> +
>  static inline void *skb_gro_network_header(struct sk_buff *skb)
>  {
>  	return (NAPI_GRO_CB(skb)->frag0 ?: skb->data) +
> -	       skb_network_offset(skb);
> +	       skb_gro_network_offset(skb);
> +}
> +
> +static inline void *skb_gro_complete_network_header(struct sk_buff *skb)
> +{
> +	return skb->encapsulation ? skb_inner_network_header(skb) :
> +				    skb_network_header(skb);
>  }
>
>  static inline __wsum inet_gro_compute_pseudo(struct sk_buff *skb, int proto)
> diff --git a/net/8021q/vlan_core.c b/net/8021q/vlan_core.c
> index f00158234505..8bc871397e47 100644
> --- a/net/8021q/vlan_core.c
> +++ b/net/8021q/vlan_core.c
> @@ -478,6 +478,9 @@ static struct sk_buff *vlan_gro_receive(struct list_head *head,
>  	if (unlikely(!vhdr))
>  		goto out;
>
> +	if (!NAPI_GRO_CB(skb)->encap_mark)
> +		skb_set_network_header(skb, hlen);
> +
>  	type = vhdr->h_vlan_encapsulated_proto;
>
>  	ptype = gro_find_receive_by_type(type);
> diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
> index 835f4f9d98d2..c0f3c162bf73 100644
> --- a/net/ipv4/af_inet.c
> +++ b/net/ipv4/af_inet.c
> @@ -1564,7 +1564,9 @@ struct sk_buff *inet_gro_receive(struct list_head *head, struct sk_buff *skb)
>
>  	NAPI_GRO_CB(skb)->is_atomic = !!(iph->frag_off & htons(IP_DF));
>  	NAPI_GRO_CB(skb)->flush |= flush;
> -	skb_set_network_header(skb, off);
> +	if (NAPI_GRO_CB(skb)->encap_mark)
> +		skb_set_inner_network_header(skb, off);
> +
>  	/* The above will be needed by the transport layer if there is one
>  	 * immediately following this IP hdr.
>  	 */
> @@ -1643,10 +1645,8 @@ int inet_gro_complete(struct sk_buff *skb, int nhoff)
>  	int proto = iph->protocol;
>  	int err = -ENOSYS;
>
> -	if (skb->encapsulation) {
> +	if (skb->encapsulation)
>  		skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IP));
> -		skb_set_inner_network_header(skb, nhoff);
> -	}
>
>  	iph_set_totlen(iph, skb->len - nhoff);
>  	csum_replace2(&iph->check, totlen, iph->tot_len);
> diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
> index 8311c38267b5..8bbcd3f502ac 100644
> --- a/net/ipv4/tcp_offload.c
> +++ b/net/ipv4/tcp_offload.c
> @@ -330,7 +330,7 @@ struct sk_buff *tcp4_gro_receive(struct list_head *head, struct sk_buff *skb)
>
>  INDIRECT_CALLABLE_SCOPE int tcp4_gro_complete(struct sk_buff *skb, int thoff)
>  {
> -	const struct iphdr *iph = ip_hdr(skb);
> +	const struct iphdr *iph = skb_gro_complete_network_header(skb);
>  	struct tcphdr *th = tcp_hdr(skb);
>
>  	th->check = ~tcp_v4_check(skb->len - thoff, iph->saddr,
> diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
> index 6c95d28d0c4a..7f59cede67f5 100644
> --- a/net/ipv4/udp_offload.c
> +++ b/net/ipv4/udp_offload.c
> @@ -709,7 +709,7 @@ EXPORT_SYMBOL(udp_gro_complete);
>
>  INDIRECT_CALLABLE_SCOPE int udp4_gro_complete(struct sk_buff *skb, int nhoff)
>  {
> -	const struct iphdr *iph = ip_hdr(skb);
> +	const struct iphdr *iph = skb_gro_complete_network_header(skb);
>  	struct udphdr *uh = (struct udphdr *)(skb->data + nhoff);
>
>  	/* do fraglist only if there is no outer UDP encap (or we already processed it) */
> diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
> index cca64c7809be..db7e3db587b9 100644
> --- a/net/ipv6/ip6_offload.c
> +++ b/net/ipv6/ip6_offload.c
> @@ -67,7 +67,7 @@ static int ipv6_gro_pull_exthdrs(struct sk_buff *skb, int off, int proto)
>  		off += len;
>  	}
>
> -	skb_gro_pull(skb, off - skb_network_offset(skb));
> +	skb_gro_pull(skb, off - skb_gro_network_offset(skb));
>  	return proto;
>  }
>
> @@ -236,7 +236,8 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head,
>  	if (unlikely(!iph))
>  		goto out;
>
> -	skb_set_network_header(skb, off);
> +	if (NAPI_GRO_CB(skb)->encap_mark)
> +		skb_set_inner_network_header(skb, off);
>
>  	flush += ntohs(iph->payload_len) != skb->len - hlen;
>
> @@ -259,7 +260,7 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head,
>  	NAPI_GRO_CB(skb)->proto = proto;
>
>  	flush--;
> -	nlen = skb_network_header_len(skb);
> +	nlen = skb_gro_offset(skb) - off;
>
>  	list_for_each_entry(p, head, list) {
>  		const struct ipv6hdr *iph2;
> @@ -353,10 +354,8 @@ INDIRECT_CALLABLE_SCOPE int ipv6_gro_complete(struct sk_buff *skb, int nhoff)
>  	int err = -ENOSYS;
>  	u32 payload_len;
>
> -	if (skb->encapsulation) {
> +	if (skb->encapsulation)
>  		skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IPV6));
> -		skb_set_inner_network_header(skb, nhoff);
> -	}
>
>  	payload_len = skb->len - nhoff - sizeof(*iph);
>  	if (unlikely(payload_len > IPV6_MAXPLEN)) {
> diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
> index bf0c957e4b5e..79eeaced2834 100644
> --- a/net/ipv6/tcpv6_offload.c
> +++ b/net/ipv6/tcpv6_offload.c
> @@ -29,7 +29,7 @@ struct sk_buff *tcp6_gro_receive(struct list_head *head, struct sk_buff *skb)
>
>  INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, int thoff)
>  {
> -	const struct ipv6hdr *iph = ipv6_hdr(skb);
> +	const struct ipv6hdr *iph = skb_gro_complete_network_header(skb);
>  	struct tcphdr *th = tcp_hdr(skb);
>
>  	th->check = ~tcp_v6_check(skb->len - thoff, &iph->saddr,
> diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
> index 6b95ba241ebe..897caa2e39fb 100644
> --- a/net/ipv6/udp_offload.c
> +++ b/net/ipv6/udp_offload.c
> @@ -164,7 +164,7 @@ struct sk_buff *udp6_gro_receive(struct list_head *head, struct sk_buff *skb)
>
>  INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb, int nhoff)
>  {
> -	const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
> +	const struct ipv6hdr *ipv6h = skb_gro_complete_network_header(skb);
>  	struct udphdr *uh = (struct udphdr *)(skb->data + nhoff);

My intuition is that this patch has a high cost for normal GRO processing.
SW-GRO is already a bottleneck on ARM cores in smart NICs.

I would suggest instead using parameters to give both the nhoff and thoff
values; this would avoid many conditionals in the fast path.

->

INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb,
					      int nhoff, int thoff)
{
	const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)(skb->data + nhoff);
	struct udphdr *uh = (struct udphdr *)(skb->data + thoff);
	...
}

INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb,
					      int nhoff, int thoff)
{
	const struct ipv6hdr *iph = (const struct ipv6hdr *)(skb->data + nhoff);
	struct tcphdr *th = (struct tcphdr *)(skb->data + thoff);
	...
}

Why store in skb fields things that could really be propagated more
efficiently as function parameters?