4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Daniel Borkmann <daniel@xxxxxxxxxxxxx>

[ Upstream commit 144fe2bfd236dc814eae587aea7e2af03dbdd755 ]

Current sg coalescing logic in sk_alloc_sg() (the latter is used by tls
and sockmap) is not quite correct: we do fetch the previous sg entry,
but the subsequent check (whether the refilled page frag from the
socket is still the same as in the last entry, with its offset and
length matching the start of the current buffer) is always performed
against the first sg list entry instead of the prior one.

Fixes: 3c4d7559159b ("tls: kernel TLS support")
Signed-off-by: Daniel Borkmann <daniel@xxxxxxxxxxxxx>
Acked-by: Dave Watson <davejwatson@xxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 net/tls/tls_sw.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -135,9 +135,10 @@ static int alloc_sg(struct sock *sk, int
 		pfrag->offset += use;
 
 		sge = sg + num_elem - 1;
-		if (num_elem > first_coalesce && sg_page(sg) == pfrag->page &&
-		    sg->offset + sg->length == orig_offset) {
-			sg->length += use;
+
+		if (num_elem > first_coalesce && sg_page(sge) == pfrag->page &&
+		    sge->offset + sge->length == orig_offset) {
+			sge->length += use;
 		} else {
 			sge++;
 			sg_unmark_end(sge);
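
For readers who want to see the coalescing decision in isolation, the
following is a minimal user-space sketch, not part of the patch and not
the kernel's scatterlist API (struct seg and can_coalesce() are made-up
stand-ins), showing why the check must be made against the last filled
entry rather than the first one.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for one scatterlist entry. */
struct seg {
	const void *page;	/* backing "page" of this entry        */
	size_t offset;		/* offset of the data within that page */
	size_t length;		/* number of bytes this entry covers   */
};

/*
 * Can a new chunk starting at (page, orig_offset) be merged into the
 * last filled entry?  This mirrors the fixed condition: it looks at
 * sg[num_elem - 1], i.e. the prior entry, not sg[0].
 */
static bool can_coalesce(const struct seg *sg, int num_elem,
			 int first_coalesce, const void *page,
			 size_t orig_offset)
{
	const struct seg *sge = &sg[num_elem - 1];

	return num_elem > first_coalesce &&
	       sge->page == page &&
	       sge->offset + sge->length == orig_offset;
}

int main(void)
{
	static char page_a[4096], page_b[4096];
	struct seg sg[4] = {
		{ page_a, 0, 256 },	/* first entry */
		{ page_b, 0, 512 },	/* last entry  */
	};

	/* New chunk starts right where the last entry ends: mergeable. */
	printf("check against last entry:  %d\n",
	       can_coalesce(sg, 2, 0, page_b, 512));
	/* The old code compared against sg[0] and would miss this merge. */
	printf("check against first entry: %d\n",
	       sg[0].page == page_b && sg[0].offset + sg[0].length == 512);
	return 0;
}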