On Wed, Nov 18, 2020 at 8:29 AM Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
>
> On Mon, 16 Nov 2020 17:15:47 +0800 Xin Long wrote:
> > This patch is to let it always do CRC checksum in sctp_gso_segment()
> > by removing CRC flag from the dev features in gre_gso_segment() for
> > SCTP over GRE, just as it does in Commit 527beb8ef9c0 ("udp: support
> > sctp over udp in skb_udp_tunnel_segment") for SCTP over UDP.
> >
> > It could set csum/csum_start in GSO CB properly in sctp_gso_segment()
> > after that commit, so it would do checksum with gso_make_checksum()
> > in gre_gso_segment(), and Commit 622e32b7d4a6 ("net: gre: recompute
> > gre csum for sctp over gre tunnels") can be reverted now.
> >
> > Signed-off-by: Xin Long <lucien.xin@xxxxxxxxx>
>
> Makes sense, but GRE tunnels don't always have a csum.
Do you mean the GRE csum can be offloaded? If so, it seems the GRE
tunnel needs something similar to:

commit 4bcb877d257c87298aedead1ffeaba0d5df1991d
Author: Tom Herbert <therbert@xxxxxxxxxx>
Date:   Tue Nov 4 09:06:52 2014 -0800

    udp: Offload outer UDP tunnel csum if available

I will confirm and implement it in another patch.

> Is the current hardware not capable of generating CRC csums over
> encapsulated packets at all?
There is some, but it is very rare. The thing is that after doing the
CRC csum, the outer GRE/UDP checksum has to be recomputed, as the
packet does NOT sum to zero once all fields covered by the CRC csum
are added up, which is different from the common checksum. So for a
GRE/UDP tunnel, the inner CRC csum has to be done in software even if
the HW supports its offload.

> I guess UDP tunnels can be configured without the csums as well
> so the situation isn't much different.
>
> > diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c
> > index e0a2465..a5935d4 100644
> > --- a/net/ipv4/gre_offload.c
> > +++ b/net/ipv4/gre_offload.c
> > @@ -15,12 +15,12 @@ static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
> >  					netdev_features_t features)
> >  {
> >  	int tnl_hlen = skb_inner_mac_header(skb) - skb_transport_header(skb);
> > -	bool need_csum, need_recompute_csum, gso_partial;
> >  	struct sk_buff *segs = ERR_PTR(-EINVAL);
> >  	u16 mac_offset = skb->mac_header;
> >  	__be16 protocol = skb->protocol;
> >  	u16 mac_len = skb->mac_len;
> >  	int gre_offset, outer_hlen;
> > +	bool need_csum, gso_partial;
>
> Nit, rev xmas tree looks broken now.
Will fix it in v2, :D Thanks.

> >  	if (!skb->encapsulation)
> >  		goto out;
> >
> > @@ -41,10 +41,10 @@ static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
> >  	skb->protocol = skb->inner_protocol;
> >
> >  	need_csum = !!(skb_shinfo(skb)->gso_type & SKB_GSO_GRE_CSUM);
> > -	need_recompute_csum = skb->csum_not_inet;
> >  	skb->encap_hdr_csum = need_csum;
> >
> >  	features &= skb->dev->hw_enc_features;
> > +	features &= ~NETIF_F_SCTP_CRC;
> >
> >  	/* segment inner packet. */
> >  	segs = skb_mac_gso_segment(skb, features);
> >
> > @@ -99,15 +99,7 @@ static struct sk_buff *gre_gso_segment(struct sk_buff *skb,
> >  	}
> >
> >  	*(pcsum + 1) = 0;
> > -	if (need_recompute_csum && !skb_is_gso(skb)) {
> > -		__wsum csum;
> > -
> > -		csum = skb_checksum(skb, gre_offset,
> > -				    skb->len - gre_offset, 0);
> > -		*pcsum = csum_fold(csum);
> > -	} else {
> > -		*pcsum = gso_make_checksum(skb, 0);
> > -	}
> > +	*pcsum = gso_make_checksum(skb, 0);
> >  	} while ((skb = skb->next));
> > out:
> >  	return segs;
>