On 2020/06/24 14:27, Eric Biggers wrote:
> On Wed, Jun 24, 2020 at 05:21:24AM +0000, Damien Le Moal wrote:
>>>> @@ -1458,13 +1459,18 @@ static void crypt_alloc_req_skcipher(struct crypt_config *cc,
>>>>
>>>>  	skcipher_request_set_tfm(ctx->r.req, cc->cipher_tfm.tfms[key_index]);
>>>>
>>>> -	/*
>>>> -	 * Use REQ_MAY_BACKLOG so a cipher driver internally backlogs
>>>> -	 * requests if driver request queue is full.
>>>> -	 */
>>>> -	skcipher_request_set_callback(ctx->r.req,
>>>> -	    CRYPTO_TFM_REQ_MAY_BACKLOG,
>>>> -	    kcryptd_async_done, dmreq_of_req(cc, ctx->r.req));
>>>> +	if (test_bit(DM_CRYPT_FORCE_INLINE, &cc->flags))
>>>> +		/* make sure we zero important fields of the request */
>>>> +		skcipher_request_set_callback(ctx->r.req,
>>>> +		    0, NULL, NULL);
>>>> +	else
>>>> +		/*
>>>> +		 * Use REQ_MAY_BACKLOG so a cipher driver internally backlogs
>>>> +		 * requests if driver request queue is full.
>>>> +		 */
>>>> +		skcipher_request_set_callback(ctx->r.req,
>>>> +		    CRYPTO_TFM_REQ_MAY_BACKLOG,
>>>> +		    kcryptd_async_done, dmreq_of_req(cc, ctx->r.req));
>>>>  }
>>>
>>> This looks wrong. Unless type=0 and mask=CRYPTO_ALG_ASYNC are passed to
>>> crypto_alloc_skcipher(), the skcipher implementation can still be
>>> asynchronous, in which case providing a callback is required.

Digging into the code further, in light of your hints, it looks like all that
needs to be done to fix this is to change crypt_convert_block_skcipher() from
doing:

	if (bio_data_dir(ctx->bio_in) == WRITE)
		r = crypto_skcipher_encrypt(req);
	else
		r = crypto_skcipher_decrypt(req);

to something like:

	struct crypto_wait wait;
	...
	if (bio_data_dir(ctx->bio_in) == WRITE) {
		if (test_bit(DM_CRYPT_FORCE_INLINE_WRITE, &cc->flags))
			r = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
		else
			r = crypto_skcipher_encrypt(req);
	} else {
		if (test_bit(DM_CRYPT_FORCE_INLINE_READ, &cc->flags))
			r = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);
		else
			r = crypto_skcipher_decrypt(req);
	}

With this change, crypt_convert_block_skcipher() can return neither -EBUSY nor
-EINPROGRESS for inline IOs, so crypt_convert() sees synchronous completions,
as expected for the inline case. This likely adds little to no overhead for
synchronous skciphers/accelerators, and handles asynchronous ones as if they
were synchronous. Would this be correct?
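For completeness, I suppose the wait context also needs to be initialized and
registered as the request completion callback: with the patch above, the
inline case sets the callback to NULL, so crypto_wait_req() would have nothing
to wake it up for an async skcipher. Something roughly like this for the
inline branches only (an untested sketch; req, ctx and cc as in the patch
above):

	/* Sketch: initialize the wait and complete it via crypto_req_done() */
	DECLARE_CRYPTO_WAIT(wait);

	skcipher_request_set_callback(req,
	    CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP,
	    crypto_req_done, &wait);

	if (bio_data_dir(ctx->bio_in) == WRITE)
		r = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	else
		r = crypto_wait_req(crypto_skcipher_decrypt(req), &wait);

With that wiring, an asynchronous skcipher completes through crypto_req_done(),
which wakes the inline path, so the result looks synchronous from
crypt_convert()'s point of view.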
>>> Do you intend that the "force_inline" option forces the use of a
>>> synchronous skcipher (alongside the other things it does)? Or should it
>>> still allow asynchronous ones?
>>>
>>> We may not actually have a choice in that matter, since xts-aes-aesni has
>>> the CRYPTO_ALG_ASYNC bit set (as I mentioned) despite being synchronous in
>>> most cases; thus, the crypto API won't give it to you if you ask for a
>>> synchronous cipher. So I think you still need to allow async skciphers?
>>> That means a callback is still always required.
>>
>> Arg... So it means that some skciphers will not be OK at all for SMR
>> writes. I was not aware of these differences (I tested with aes-xts-plain64
>> only). The ugly way to support async ciphers would be to simply wait inline
>> for the crypto API to complete, using a completion for instance. But that
>> is very ugly. Back to brainstorming; I need to learn more about the crypto
>> API...
>>
> It's easy to wait for crypto API requests to complete if you need to --
> just use crypto_wait_req().
>
> We do this in fs/crypto/, for example. (Not many people are using fscrypt
> with crypto API based accelerators, so there hasn't yet been much need to
> support the complexity of issuing multiple async crypto requests like
> dm-crypt supports.)
>
> - Eric

--
Damien Le Moal
Western Digital Research

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel