Re: [PATCH] crypto: x86/aes-gcm: Disable FPU around skcipher_walk_done().

On Fri, Aug 02, 2024 at 09:28:32AM -0700, Eric Biggers wrote:
> Hi Sebastian,
> 
> On Fri, Aug 02, 2024 at 12:23:33PM +0200, Sebastian Andrzej Siewior wrote:
> > kernel_fpu_begin() disables preemption. gcm_crypt() invokes
> > skcipher_walk_done() within a preempt-disabled section.
> > skcipher_walk_done() can invoke kfree(), which takes sleeping locks on
> > PREEMPT_RT and therefore must not be invoked with preemption disabled.
> > 
> > Disable FPU access (ending the preempt-disabled section) while
> > skcipher_walk_done() is invoked.
> > 
> > Fixes: b06affb1cb580 ("crypto: x86/aes-gcm - add VAES and AVX512 / AVX10 optimized AES-GCM")
> > Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> > ---
> >  arch/x86/crypto/aesni-intel_glue.c | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
> > index cd37de5ec4046..be92e4c3f9c7f 100644
> > --- a/arch/x86/crypto/aesni-intel_glue.c
> > +++ b/arch/x86/crypto/aesni-intel_glue.c
> > @@ -1403,7 +1403,9 @@ gcm_crypt(struct aead_request *req, int flags)
> >  			aes_gcm_update(key, le_ctr, ghash_acc,
> >  				       walk.src.virt.addr, walk.dst.virt.addr,
> >  				       nbytes, flags);
> > +			kernel_fpu_end();
> >  			err = skcipher_walk_done(&walk, 0);
> > +			kernel_fpu_begin();
> >  			/*
> >  			 * The low word of the counter isn't used by the
> >  			 * finalize, so there's no need to increment it here.
> 
> Can you make this conditional on CONFIG_PREEMPT_RT so that it doesn't hurt
> performance for everyone else?
> 
> Note that kfree() lacks a might_sleep(), and its kerneldoc does not say that it
> can sleep.  Have you checked for other instances of this same problem?  It seems
> it would be quite common kernel-wide.  Is it really necessary that kfree() takes
> a sleepable lock on PREEMPT_RT?
> 
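
To spell out the conditional version: the minimal change on top of the
patch above would be something like this (untested sketch; with
IS_ENABLED(), the compiler drops the extra calls when PREEMPT_RT is
disabled):

	aes_gcm_update(key, le_ctr, ghash_acc,
		       walk.src.virt.addr, walk.dst.virt.addr,
		       nbytes, flags);
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		kernel_fpu_end();	/* let kfree() take sleeping locks */
	err = skcipher_walk_done(&walk, 0);
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		kernel_fpu_begin();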

This would work too, I think:

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index cd37de5ec4046..2d6bcf7fc7c51 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -1401,11 +1401,12 @@ gcm_crypt(struct aead_request *req, int flags)
 		} else {
 			/* Last segment: process all remaining data. */
 			aes_gcm_update(key, le_ctr, ghash_acc,
 				       walk.src.virt.addr, walk.dst.virt.addr,
 				       nbytes, flags);
-			err = skcipher_walk_done(&walk, 0);
+			err = 0;
+			break;
 			/*
 			 * The low word of the counter isn't used by the
 			 * finalize, so there's no need to increment it here.
 			 */
 		}
@@ -1439,10 +1440,12 @@ gcm_crypt(struct aead_request *req, int flags)
 		if (!aes_gcm_dec_final(key, le_ctr, ghash_acc, total_aadlen,
 				       datalen, tag, taglen, flags))
 			err = -EBADMSG;
 	}
 out:
 	kernel_fpu_end();
+	if (nbytes)
+		skcipher_walk_done(&walk, 0);
 	return err;
 }
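
With this variant, the last segment breaks out of the walk loop and the
final skcipher_walk_done() runs after kernel_fpu_end(), i.e. with
preemption enabled again, so the kfree() inside it is fine even on
PREEMPT_RT. Roughly, gcm_crypt() would end up shaped like this (sketch
only; the non-last-segment handling and error paths are elided):

	kernel_fpu_begin();		/* disables preemption */
	while ((nbytes = walk.nbytes) != 0) {
		if (nbytes < walk.total) {
			/* Non-last segment: process and advance the walk. */
			...
		} else {
			/* Last segment: process all remaining data. */
			aes_gcm_update(key, le_ctr, ghash_acc,
				       walk.src.virt.addr, walk.dst.virt.addr,
				       nbytes, flags);
			err = 0;
			break;	/* defer the walk cleanup */
		}
	}
	...
out:
	kernel_fpu_end();		/* preemption enabled again */
	if (nbytes)			/* only nonzero via the break above */
		skcipher_walk_done(&walk, 0);
	return err;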



