On Mon, Dec 19, 2022 at 02:37:32PM -0600, Robert Elliott wrote:
> Add crypto_yield() calls at the end of all the encrypt and decrypt
> functions to let the scheduler use the CPU after possibly a long
> tenure by the crypto driver.
>
> This reduces RCU stalls and soft lockups when running crypto
> functions back-to-back that don't have their own yield calls
> (e.g., aligned generic functions).
>
> Signed-off-by: Robert Elliott <elliott@xxxxxxx>
> ---
>  crypto/aead.c  |  4 ++++
>  crypto/shash.c | 32 ++++++++++++++++++++++++--------
>  2 files changed, 28 insertions(+), 8 deletions(-)
>
> diff --git a/crypto/aead.c b/crypto/aead.c
> index 16991095270d..f88378f4d4f5 100644
> --- a/crypto/aead.c
> +++ b/crypto/aead.c
> @@ -93,6 +93,8 @@ int crypto_aead_encrypt(struct aead_request *req)
> 	else
> 		ret = crypto_aead_alg(aead)->encrypt(req);
> 	crypto_stats_aead_encrypt(cryptlen, alg, ret);
> +
> +	crypto_yield(crypto_aead_get_flags(aead));

This is the wrong place to do it.  It should be done by the code
that's actually doing the work, just like skcipher.

> diff --git a/crypto/shash.c b/crypto/shash.c
> index 868b6ba2b3b7..6fea17a50048 100644
> --- a/crypto/shash.c
> +++ b/crypto/shash.c
> @@ -114,11 +114,15 @@ int crypto_shash_update(struct shash_desc *desc, const u8 *data,
> 	struct crypto_shash *tfm = desc->tfm;
> 	struct shash_alg *shash = crypto_shash_alg(tfm);
> 	unsigned long alignmask = crypto_shash_alignmask(tfm);
> +	int ret;
>
> 	if ((unsigned long)data & alignmask)
> -		return shash_update_unaligned(desc, data, len);
> +		ret = shash_update_unaligned(desc, data, len);
> +	else
> +		ret = shash->update(desc, data, len);
>
> -	return shash->update(desc, data, len);
> +	crypto_yield(crypto_shash_get_flags(tfm));
> +	return ret;
> }
> EXPORT_SYMBOL_GPL(crypto_shash_update);

Ditto.

Cheers,
-- 
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt