4.17-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Daniel Borkmann <daniel@xxxxxxxxxxxxx>

[ Upstream commit 144fe2bfd236dc814eae587aea7e2af03dbdd755 ]

Current sg coalescing logic in sk_alloc_sg() (the latter is used by tls
and sockmap) is not quite correct: we do fetch the previous sg entry,
but the subsequent check, whether the refilled page frag from the socket
is still the same as in the last entry with prior offset and length
matching the start of the current buffer, always compares against the
first sg list entry instead of the prior one.

Fixes: 3c4d7559159b ("tls: kernel TLS support")
Signed-off-by: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
Acked-by: Dave Watson <davejwatson@xxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 net/core/sock.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2270,9 +2270,9 @@ int sk_alloc_sg(struct sock *sk, int len
 		pfrag->offset += use;
 
 		sge = sg + sg_curr - 1;
-		if (sg_curr > first_coalesce && sg_page(sg) == pfrag->page &&
-		    sg->offset + sg->length == orig_offset) {
-			sg->length += use;
+		if (sg_curr > first_coalesce && sg_page(sge) == pfrag->page &&
+		    sge->offset + sge->length == orig_offset) {
+			sge->length += use;
 		} else {
 			sge = sg + sg_curr;
 			sg_unmark_end(sge);
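
To make the mistake concrete, below is a minimal userspace sketch of the
coalescing decision. It is not the kernel code: the struct and helper
names are purely illustrative and are not the scatterlist API. The point
it shows is that the page, offset and length check has to be done
against the entry at index sg_curr - 1 (what sge already points to after
the fix), not against the first entry of the list.

/*
 * Minimal userspace sketch (illustrative only, not net/core/sock.c).
 * Builds with: cc -std=c99 -Wall coalesce_sketch.c
 */
#include <stdio.h>
#include <stdbool.h>

struct frag_entry {
	const char   *page;    /* stands in for sg_page()        */
	unsigned int  offset;
	unsigned int  length;
};

/* Can the new chunk be merged into the entry at index curr - 1? */
static bool can_coalesce(const struct frag_entry *entries, int curr,
			 int first_coalesce, const char *page,
			 unsigned int orig_offset)
{
	const struct frag_entry *prev = &entries[curr - 1];

	/* Compare the *previous* entry, not entries[0]. */
	return curr > first_coalesce &&
	       prev->page == page &&
	       prev->offset + prev->length == orig_offset;
}

int main(void)
{
	static const char page_a[4096], page_b[4096];
	struct frag_entry entries[4] = {
		{ page_a, 0, 512 },	/* entry 0: first entry    */
		{ page_b, 0, 256 },	/* entry 1: previous entry */
	};
	int curr = 2, first_coalesce = 0;

	/*
	 * The new chunk starts at offset 256 in page_b, right where
	 * entry 1 ends, so it coalesces into entry 1 even though
	 * entry 0 (page_a) would not match.
	 */
	bool ok = can_coalesce(entries, curr, first_coalesce, page_b, 256);

	printf("coalesce: %s\n", ok ? "yes" : "no");
	return 0;
}

With the fix, a buffer that starts exactly where the previous entry ends
within the same page frag is merged into that entry; anything else gets
a fresh entry at sg_curr, as the else branch in the hunk above shows.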