[PATCH] crypto: Fix next IV issue for CTS template

From: Libo Wang <libo.wang@xxxxxxx>

The CTS template assumes that the underlying CBC algorithm will carry the
next IV forward for further processing. But some CBC implementations in the
kernel break this assumption; for example, some hardware crypto drivers
ignore the next IV for performance reasons. In that case, the tcrypt
cts(cbc(aes)) test case fails. This patch fixes the issue by preparing the
next IV before the last two blocks are processed.
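For context (this is an illustration, not part of the patch): the next-IV
contract the template relies on can be sketched with a toy CBC chain in
Python. The XOR "block cipher" below is a placeholder for AES, used only to
show that the IV for a continuation call must equal the last ciphertext
block of the previous call; a driver that does not write this next IV back
breaks split processing like CTS.

```python
BSIZE = 16

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: XOR with the key (illustration only).
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encrypt(key: bytes, iv: bytes, data: bytes):
    out = []
    prev = iv
    for i in range(0, len(data), BSIZE):
        # CBC: XOR plaintext block with previous ciphertext block, then encrypt.
        xored = bytes(x ^ p for x, p in zip(data[i:i + BSIZE], prev))
        prev = toy_encrypt_block(key, xored)
        out.append(prev)
    # Return ciphertext plus the "next IV" (the last ciphertext block).
    return b"".join(out), prev

key = bytes(range(16))
iv = b"\x00" * 16
msg = bytes(64)  # four zero blocks

# One-shot encryption over all four blocks.
ct_all, _ = cbc_encrypt(key, iv, msg)

# Split processing, as CTS does: first the CBC blocks, then the tail,
# carrying the next IV between the two calls.
ct1, next_iv = cbc_encrypt(key, iv, msg[:32])
ct2, _ = cbc_encrypt(key, next_iv, msg[32:])

# Chaining only lines up if the second call receives the correct next IV.
assert ct1 + ct2 == ct_all
assert next_iv == ct1[-BSIZE:]
```

If the first call returned a stale IV instead of the last ciphertext block,
the second call would diverge from the one-shot result, which is exactly the
failure the patch works around.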

Signed-off-by: Libo Wang <libo.wang@xxxxxxx>
Signed-off-by: Dennis Chen <dennis.chen@xxxxxxx>
---
 crypto/cts.c | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/crypto/cts.c b/crypto/cts.c
index a1335d6..712164b 100644
--- a/crypto/cts.c
+++ b/crypto/cts.c
@@ -154,6 +154,7 @@ static int crypto_cts_encrypt(struct skcipher_request *req)
        unsigned int nbytes = req->cryptlen;
        int cbc_blocks = (nbytes + bsize - 1) / bsize - 1;
        unsigned int offset;
+       int ret = 0;

        skcipher_request_set_tfm(subreq, ctx->child);

@@ -174,8 +175,17 @@ static int crypto_cts_encrypt(struct skcipher_request *req)
        skcipher_request_set_crypt(subreq, req->src, req->dst,
                                   offset, req->iv);

-       return crypto_skcipher_encrypt(subreq) ?:
-              cts_cbc_encrypt(req);
+       /* process CBC blocks */
+       ret = crypto_skcipher_encrypt(subreq);
+       /* on success, recover IVn-1 and finish the last two blocks */
+       if (!ret) {
+               /* Get IVn-1 back */
+               scatterwalk_map_and_copy(req->iv, req->dst, (offset - bsize), bsize, 0);
+               /* Continue last two blocks */
+               return cts_cbc_encrypt(req);
+       }
+
+       return ret;
 }

 static int cts_cbc_decrypt(struct skcipher_request *req)
@@ -248,6 +258,8 @@ static int crypto_cts_decrypt(struct skcipher_request *req)
        int cbc_blocks = (nbytes + bsize - 1) / bsize - 1;
        unsigned int offset;
        u8 *space;
+       int ret = 0;
+       u8 iv_next[bsize];

        skcipher_request_set_tfm(subreq, ctx->child);

@@ -277,8 +289,17 @@ static int crypto_cts_decrypt(struct skcipher_request *req)
        skcipher_request_set_crypt(subreq, req->src, req->dst,
                                   offset, req->iv);

-       return crypto_skcipher_decrypt(subreq) ?:
-              cts_cbc_decrypt(req);
+       /* save next IV, then process the CBC blocks */
+       scatterwalk_map_and_copy(iv_next, req->src, (offset - bsize), bsize, 0);
+       ret = crypto_skcipher_decrypt(subreq);
+       if (!ret) {
+               /* Set Next IV */
+               subreq->iv = iv_next;
+               /* process last two blocks */
+               return cts_cbc_decrypt(req);
+       }
+
+       return ret;
 }

 static int crypto_cts_init_tfm(struct crypto_skcipher *tfm)
--
1.8.3.1
