On 2023-11-28 07:22, Herbert Xu wrote:
When processing the last block, the s390 ctr code will always read
a whole block, even if there isn't a whole block of data left. Fix
this by using the actual length left and copying the remaining data
into a buffer first for processing.
Fixes: 0200f3ecc196 ("crypto: s390 - add System z hardware support for CTR mode")
Cc: <stable@xxxxxxxxxxxxxxx>
Reported-by: Guangwu Zhang <guazhang@xxxxxxxxxx>
Signed-off-by: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
index c773820e4af9..c6fe5405de4a 100644
--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -597,7 +597,9 @@ static int ctr_aes_crypt(struct skcipher_request *req)
* final block may be < AES_BLOCK_SIZE, copy only nbytes
*/
if (nbytes) {
- cpacf_kmctr(sctx->fc, sctx->key, buf, walk.src.virt.addr,
+ memset(buf, 0, AES_BLOCK_SIZE);
+ memcpy(buf, walk.src.virt.addr, nbytes);
+ cpacf_kmctr(sctx->fc, sctx->key, buf, buf,
AES_BLOCK_SIZE, walk.iv);
memcpy(walk.dst.virt.addr, buf, nbytes);
crypto_inc(walk.iv, AES_BLOCK_SIZE);
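
For illustration only, here is the same pattern as a self-contained
user-space sketch (this is not kernel code; the toy keystream routine
below merely stands in for cpacf_kmctr(), and all names are made up):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16

/* Toy keystream generator; a stand-in for the real AES-CTR engine. */
static void toy_keystream_block(const uint8_t ctr[BLOCK_SIZE],
                                uint8_t out[BLOCK_SIZE])
{
        for (int i = 0; i < BLOCK_SIZE; i++)
                out[i] = (uint8_t)(ctr[i] ^ 0x5a);
}

/*
 * Handle the final, possibly partial block. src may be only nbytes long
 * (nbytes <= BLOCK_SIZE), so it must never be read past that.
 */
static void ctr_final_block(const uint8_t ctr[BLOCK_SIZE],
                            const uint8_t *src, uint8_t *dst, size_t nbytes)
{
        uint8_t buf[BLOCK_SIZE], ks[BLOCK_SIZE];

        /* Copy only the bytes that actually exist, zero-pad the rest. */
        memset(buf, 0, BLOCK_SIZE);
        memcpy(buf, src, nbytes);

        /* Process the padded copy, never the caller's short buffer. */
        toy_keystream_block(ctr, ks);
        for (size_t i = 0; i < BLOCK_SIZE; i++)
                buf[i] ^= ks[i];

        /* Write back only the valid bytes. */
        memcpy(dst, buf, nbytes);
}

int main(void)
{
        uint8_t ctr[BLOCK_SIZE] = { 0 };
        uint8_t src[5] = "tail";        /* deliberately shorter than a block */
        uint8_t dst[5];

        ctr_final_block(ctr, src, dst, sizeof(src));
        for (size_t i = 0; i < sizeof(dst); i++)
                printf("%02x", dst[i]);
        printf("\n");
        return 0;
}

The point of the sketch is the memset()/memcpy() before the block
operation and the nbytes-limited copy back afterwards, which is exactly
what the hunk above adds around cpacf_kmctr().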
Here is a similar fix for the s390 paes ctr cipher. It compiles and has
been tested. You may merge this with your patch for the s390 aes cipher.
--------------------------------------------------------------------------------
diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
index 8b541e44151d..55ee5567a5ea 100644
--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -693,9 +693,11 @@ static int ctr_paes_crypt(struct skcipher_request *req)
* final block may be < AES_BLOCK_SIZE, copy only nbytes
*/
if (nbytes) {
+ memset(buf, 0, AES_BLOCK_SIZE);
+ memcpy(buf, walk.src.virt.addr, nbytes);
while (1) {
if (cpacf_kmctr(ctx->fc, &param, buf,
- walk.src.virt.addr, AES_BLOCK_SIZE,
+ buf, AES_BLOCK_SIZE,
walk.iv) == AES_BLOCK_SIZE)
break;
if (__paes_convert_key(ctx))