Hi Sebastian,

On Fri, Aug 02, 2024 at 12:23:33PM +0200, Sebastian Andrzej Siewior wrote:
> kernel_fpu_begin() disables preemption. gcm_crypt() has a
> skcipher_walk_done() invocation within a preempt disabled section.
> skcipher_walk_done() can invoke kfree() which requires sleeping locks on
> PREEMPT_RT and must not be invoked with disabled preemption.
>
> Keep FPU access enabled while skcipher_walk_done() is invoked.
>
> Fixes: b06affb1cb580 ("crypto: x86/aes-gcm - add VAES and AVX512 / AVX10 optimized AES-GCM")
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> ---
>  arch/x86/crypto/aesni-intel_glue.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
> index cd37de5ec4046..be92e4c3f9c7f 100644
> --- a/arch/x86/crypto/aesni-intel_glue.c
> +++ b/arch/x86/crypto/aesni-intel_glue.c
> @@ -1403,7 +1403,9 @@ gcm_crypt(struct aead_request *req, int flags)
>  			aes_gcm_update(key, le_ctr, ghash_acc,
>  				       walk.src.virt.addr, walk.dst.virt.addr,
>  				       nbytes, flags);
> +			kernel_fpu_end();
>  			err = skcipher_walk_done(&walk, 0);
> +			kernel_fpu_begin();
>  			/*
>  			 * The low word of the counter isn't used by the
>  			 * finalize, so there's no need to increment it here.

Can you make this conditional on CONFIG_PREEMPT_RT so that it doesn't hurt
performance for everyone else?

Note that kfree() lacks a might_sleep(), and its kerneldoc does not say that
it can sleep.

Have you checked for other instances of this same problem?  It seems it would
be quite common kernel-wide.

Is it really necessary that kfree() takes a sleepable lock on PREEMPT_RT?

- Eric
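
P.S. To be concrete about the conditional I'm suggesting: an untested sketch
of the hunk using IS_ENABLED(CONFIG_PREEMPT_RT), which compiles the extra
kernel_fpu_end()/kernel_fpu_begin() pair out entirely on non-RT kernels, so
the fast path is unchanged for everyone else:

```c
			aes_gcm_update(key, le_ctr, ghash_acc,
				       walk.src.virt.addr, walk.dst.virt.addr,
				       nbytes, flags);
			/*
			 * On PREEMPT_RT, skcipher_walk_done() may call
			 * kfree(), which can take sleeping locks, so drop
			 * the FPU (and thus re-enable preemption) around
			 * it.  IS_ENABLED() makes this a no-op otherwise.
			 */
			if (IS_ENABLED(CONFIG_PREEMPT_RT))
				kernel_fpu_end();
			err = skcipher_walk_done(&walk, 0);
			if (IS_ENABLED(CONFIG_PREEMPT_RT))
				kernel_fpu_begin();
```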