On Mon, Dec 06, 2021 at 05:35:36PM +0800, Tianjia Zhang wrote:
> commit 5961060692f8b17cd2080620a3d27b95d2ae05ca upstream.
>
> When the TLS cipher suite uses CCM mode, including AES CCM and
> SM4 CCM, the first byte of the B0 block is flags, and the real
> IV starts from the second byte. The XOR operation of the IV and
> rec_seq should skip this byte, that is, add the iv_offset.
>
> Fixes: f295b3ae9f59 ("net/tls: Add support of AES128-CCM based ciphers")
> Signed-off-by: Tianjia Zhang <tianjia.zhang@xxxxxxxxxxxxxxxxx>
> Cc: Vakul Garg <vakul.garg@xxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx # v5.2+
> Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
> ---
>  net/tls/tls_sw.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
> index 122d5daed8b6..8cd011ea9fbb 100644
> --- a/net/tls/tls_sw.c
> +++ b/net/tls/tls_sw.c
> @@ -515,7 +515,7 @@ static int tls_do_encryption(struct sock *sk,
>  	memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv,
>  	       prot->iv_size + prot->salt_size);
>
> -	xor_iv_with_seq(prot->version, rec->iv_data, tls_ctx->tx.rec_seq);
> +	xor_iv_with_seq(prot->version, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq);
>
>  	sge->offset += prot->prepend_size;
>  	sge->length -= prot->prepend_size;
> @@ -1487,7 +1487,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
>  	else
>  		memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
>
> -	xor_iv_with_seq(prot->version, iv, tls_ctx->rx.rec_seq);
> +	xor_iv_with_seq(prot->version, iv + iv_offset, tls_ctx->rx.rec_seq);
>
>  	/* Prepare AAD */
>  	tls_make_aad(aad, rxm->full_len - prot->overhead_size +
> --
> 2.19.1.3.ge56e4f7

Both backports now queued up, thanks.

greg k-h