On Mon, Dec 19, 2022 at 04:02:13PM -0600, Robert Elliott wrote:
>
> diff --git a/arch/x86/crypto/sha1_avx2_x86_64_asm.S b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
> index c3ee9334cb0f..df03fbb2c42c 100644
> --- a/arch/x86/crypto/sha1_avx2_x86_64_asm.S
> +++ b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
> @@ -58,9 +58,9 @@
>  /*
>   * SHA-1 implementation with Intel(R) AVX2 instruction set extensions.
>   *
> - *This implementation is based on the previous SSSE3 release:
> - *Visit http://software.intel.com/en-us/articles/
> - *and refer to improving-the-performance-of-the-secure-hash-algorithm-1/
> + * This implementation is based on the previous SSSE3 release:
> + * Visit http://software.intel.com/en-us/articles/
> + * and refer to improving-the-performance-of-the-secure-hash-algorithm-1/

Could you please leave out changes which are not related to the main
purpose of this patch?  Put them into a separate patch if necessary.

> diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
> index 44340a1139e0..b269b455fbbe 100644
> --- a/arch/x86/crypto/sha1_ssse3_glue.c
> +++ b/arch/x86/crypto/sha1_ssse3_glue.c
> @@ -41,9 +41,7 @@ static int sha1_update(struct shash_desc *desc, const u8 *data,
>  	 */
>  	BUILD_BUG_ON(offsetof(struct sha1_state, state) != 0);
>
> -	kernel_fpu_begin();
>  	sha1_base_do_update(desc, data, len, sha1_xform);
> -	kernel_fpu_end();

Moving kernel_fpu_begin/kernel_fpu_end down seems to be entirely
unnecessary as you could already call kernel_fpu_yield deep down
the stack with the current code.

Thanks,
-- 
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
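
For illustration only, here is a rough C sketch of the arrangement Herbert
describes: the glue wrapper keeps kernel_fpu_begin()/kernel_fpu_end() exactly
where the current code has them, and a block function called underneath yields
from inside that FPU section. Note the assumptions: kernel_fpu_yield() is the
helper proposed in this patch series, not an existing mainline API, and
sha1_transform_avx2_yielding() together with its chunk size is hypothetical,
added only to show the shape of the idea; the glue wrapper is simplified from
the real sha1_ssse3_glue.c.

	/*
	 * Minimal sketch only, not the actual patch.  kernel_fpu_yield()
	 * is assumed to behave roughly like
	 * "kernel_fpu_end(); cond_resched(); kernel_fpu_begin();".
	 */
	#include <linux/kernel.h>
	#include <crypto/internal/hash.h>
	#include <crypto/sha1_base.h>
	#include <asm/fpu/api.h>

	asmlinkage void sha1_transform_avx2(struct sha1_state *state,
					    const u8 *data, int blocks);

	/*
	 * Current arrangement (simplified): the glue wrapper owns the FPU
	 * section, exactly as before the patch.
	 */
	static int sha1_update(struct shash_desc *desc, const u8 *data,
			       unsigned int len, sha1_block_fn *sha1_xform)
	{
		BUILD_BUG_ON(offsetof(struct sha1_state, state) != 0);

		kernel_fpu_begin();
		sha1_base_do_update(desc, data, len, sha1_xform);
		kernel_fpu_end();

		return 0;
	}

	/*
	 * A block function reached from sha1_base_do_update() can yield
	 * periodically from *inside* the FPU section opened above, so
	 * kernel_fpu_begin()/kernel_fpu_end() do not need to move.
	 * This wrapper and its chunk size are hypothetical.
	 */
	static void sha1_transform_avx2_yielding(struct sha1_state *state,
						 const u8 *data, int blocks)
	{
		while (blocks > 0) {
			/* Process ~4 KiB, then give the scheduler a chance. */
			int chunk = min(blocks, 4096 / SHA1_BLOCK_SIZE);

			sha1_transform_avx2(state, data, chunk);
			data += chunk * SHA1_BLOCK_SIZE;
			blocks -= chunk;

			if (blocks > 0)
				kernel_fpu_yield(); /* proposed helper, not mainline */
		}
	}

The point of the sketch is that the yield happens several frames below the
begin/end pair, which is why hoisting that pair out of the wrapper buys
nothing with the current code structure.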