On 2024-08-02 09:28:32 [-0700], Eric Biggers wrote:
> Hi Sebastian,

Hi Eric,

> > diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
> > index cd37de5ec4046..be92e4c3f9c7f 100644
> > --- a/arch/x86/crypto/aesni-intel_glue.c
> > +++ b/arch/x86/crypto/aesni-intel_glue.c
> > @@ -1403,7 +1403,9 @@ gcm_crypt(struct aead_request *req, int flags)
> >  			aes_gcm_update(key, le_ctr, ghash_acc,
> >  				       walk.src.virt.addr, walk.dst.virt.addr,
> >  				       nbytes, flags);
> > +			kernel_fpu_end();
> >  			err = skcipher_walk_done(&walk, 0);
> > +			kernel_fpu_begin();
> >  			/*
> >  			 * The low word of the counter isn't used by the
> >  			 * finalize, so there's no need to increment it here.
>
> Can you make this conditional on CONFIG_PREEMPT_RT so that it doesn't hurt
> performance for everyone else?

Every other instance in this file had a kernel_fpu_end()/begin() around
skcipher_walk_done(), so I thought this one was just missed by chance.

> Note that kfree() lacks a might_sleep(), and its kerneldoc does not say that it
> can sleep. Have you checked for other instances of this same problem? It seems
> it would be quite common kernel-wide.

kfree() can't have a might_sleep() because it does not qualify for it: on
!PREEMPT_RT you can invoke kfree() from softirq context, or while holding a
spinlock_t, and a might_sleep() there would trigger a false positive. On
PREEMPT_RT, interrupts are threaded, softirq is preemptible, and spinlock_t
is a sleeping lock, so all the contexts in which kfree() would run with
preemption disabled on !PREEMPT_RT are actually preemptible on PREEMPT_RT.
This is of course not true in cases where preemption is explicitly disabled,
as it is here.

> Is it really necessary that kfree() takes
> a sleepable lock on PREEMPT_RT?

Yes. The locking in SLUB and the page allocator is spinlock_t.

> - Eric

Sebastian