On Wednesday 08 July 2015 09:48 AM, Herbert Xu wrote:
> On Tue, Jul 07, 2015 at 09:01:48PM +0530, Lokesh Vutla wrote:
>>
>> +static int omap_aes_gcm_copy_buffers(struct omap_aes_dev *dd,
>> +				      struct aead_request *req)
>> +{
>> +	void *buf_in;
>> +	int pages, alen, clen, cryptlen, nsg;
>> +	struct crypto_aead *aead = crypto_aead_reqtfm(req);
>> +	unsigned int authlen = crypto_aead_authsize(aead);
>> +	u32 dec = !(dd->flags & FLAGS_ENCRYPT);
>> +	struct scatterlist *input, *assoc, tmp[2];
>> +
>> +	alen = ALIGN(req->assoclen, AES_BLOCK_SIZE);
>> +	cryptlen = req->cryptlen - (dec * authlen);
>> +	clen = ALIGN(cryptlen, AES_BLOCK_SIZE);
>> +
>> +	dd->sgs_copied = 0;
>> +
>> +	nsg = !!(req->assoclen && req->cryptlen);
>> +
>> +	assoc = &req->src[0];
>> +	sg_init_table(dd->in_sgl, nsg + 1);
>> +	if (req->assoclen) {
>> +		if (omap_aes_check_aligned(assoc, req->assoclen)) {
>> +			dd->sgs_copied |= AES_ASSOC_DATA_COPIED;
>> +			pages = get_order(alen);
>> +			buf_in = (void *)__get_free_pages(GFP_ATOMIC, pages);
>> +			if (!buf_in) {
>> +				pr_err("Couldn't allocate for unaligncases.\n");
>> +				return -1;
>> +			}
>> +
>> +			scatterwalk_map_and_copy(buf_in, assoc, 0,
>> +						 req->assoclen, 0);
>> +			memset(buf_in + req->assoclen, 0, alen - req->assoclen);
>> +		} else {
>> +			buf_in = sg_virt(req->assoc);
>
> req->assoc is now obsolete. Did you test this code?

Sorry, I missed it. I'll update it.

>
>> +static int do_encrypt_iv(struct aead_request *req, u32 *tag)
>> +{
>> +	struct scatterlist iv_sg;
>> +	struct ablkcipher_request *ablk_req;
>> +	struct crypto_ablkcipher *tfm;
>> +	struct tcrypt_result result;
>> +	struct omap_aes_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
>> +	int ret = 0;
>> +
>> +	tfm = crypto_alloc_ablkcipher("ctr(aes)", 0, 0);
>
> Ugh, you cannot allocate crypto transforms in the data path. You
> should allocate it in init instead. Also using ctr(aes) is overkill.
> Just use aes and do the xor by hand.

I'll take care of this.
>
>> +static int omap_aes_gcm_crypt(struct aead_request *req, unsigned long mode)
>> +{
>> +	struct omap_aes_ctx *ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
>> +	struct omap_aes_reqctx *rctx = aead_request_ctx(req);
>> +	struct crypto_aead *aead = crypto_aead_reqtfm(req);
>> +	unsigned int authlen = crypto_aead_authsize(aead);
>> +	struct omap_aes_dev *dd;
>> +	__be32 counter = cpu_to_be32(1);
>> +	int err;
>> +
>> +	memset(ctx->auth_tag, 0, sizeof(ctx->auth_tag));
>
> The ctx is shared memory and you must not write to it as multiple
> requests can be called on the same tfm. Use rctx instead.
>
>> +	memcpy(req->iv + 12, &counter, 4);
>
> The IV is only 12 bytes long so you're corrupting memory here.
> You should use rctx here too.

OK, I'll use rctx. Thanks for pointing this out.

>
>> +	if (req->assoclen + req->cryptlen == 0) {
>> +		scatterwalk_map_and_copy(ctx->auth_tag, req->dst, 0, authlen,
>> +					 1);
>> +		return 0;
>> +	}
>
> How can this be right? Did you enable the selftest?

Why not? The self-tests pass for this case.
As per the equation given in the GCM spec [1], if both assoclen and
cryptlen are 0, the output of GCM is just E(K, Y0), where
Y0 = IV || (0^31)1.
E(K, Y0) is already calculated in the previous step, so I copy it to the
destination when assoclen and cryptlen are 0.
Correct me if I am wrong.

Thanks and regards,
Lokesh

[1] http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-revised-spec.pdf

>
> Cheers,
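P.S. For reference, the empty-input case falls out of the spec's tag
definition directly (sketch of the derivation, using the notation of [1]):

    T = MSB_t( GHASH(H, A, C) XOR E(K, Y0) )

With A and C both empty, GHASH processes only the final length block
len(A) || len(C) = 0^128, so its single iteration gives

    X1 = (X0 XOR 0^128) . H = (0 XOR 0) . H = 0^128

and therefore

    T = MSB_t( 0^128 XOR E(K, Y0) ) = MSB_t( E(K, Y0) )

i.e. the tag is just the (truncated) block-cipher output on Y0.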