On Mon, 24 May 2021 at 23:51, Eric Biggers <ebiggers@xxxxxxxxxx> wrote:
>
> On Fri, May 21, 2021 at 12:20:52PM +0200, Ard Biesheuvel wrote:
> > AES/CCM on arm64 is implemented as a synchronous AEAD, and so it is
> > guaranteed by the API that it is only invoked in task or softirq
> > context. Since softirqs are now only handled when the SIMD is not
> > being used in the task context that was interrupted to service the
> > softirq, we no longer need a fallback path. Let's remove it.
> >
> > Signed-off-by: Ard Biesheuvel <ardb@xxxxxxxxxx>
> > ---
> >  arch/arm64/crypto/aes-ce-ccm-core.S |   1 +
> >  arch/arm64/crypto/aes-ce-ccm-glue.c | 181 ++++++--------------
> >  2 files changed, 53 insertions(+), 129 deletions(-)
>
> This doesn't just remove the no-SIMD fallback, but it also does some
> refactoring. Notably, it starts to process all the authenticated data
> in one kernel_neon_begin() / kernel_neon_end() pair rather than many.
> Can you explain why that is okay now when previously it wasn't, and
> also split this into two separate commits?
>

OK. For the record, the reason is that, even though kernel_neon_begin/end
are reasonably cheap these days, the common case for CCM (given its use in
a networking context) is for the auth/encrypt/finalize routines each to be
called a single time, without any potentially sleeping calls into the
skcipher walk layer in between. Now that we are doing more work in there
(disabling softirq processing as well as preemption), it seemed a suitable
occasion to do some refactoring that I have had on my list for a while now.
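
To make the shape of that change concrete, here is an illustrative
sketch only, not the actual patch: ccm_update_mac() and the scatterwalk
handling are simplified, and the real code lives in
arch/arm64/crypto/aes-ce-ccm-glue.c. The "many pairs" version brackets
every chunk of associated data with its own SIMD region:

#include <asm/neon.h>
#include <crypto/aead.h>
#include <crypto/scatterwalk.h>

static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
{
        struct scatter_walk walk;
        u32 len = req->assoclen;

        scatterwalk_start(&walk, req->src);
        do {
                u32 n = scatterwalk_clamp(&walk, len);
                u8 *p = scatterwalk_map(&walk);

                /* one NEON region per chunk of associated data */
                kernel_neon_begin();
                ccm_update_mac(mac, p, n);      /* simplified helper */
                kernel_neon_end();

                len -= n;
                scatterwalk_unmap(p);
                scatterwalk_advance(&walk, n);
                scatterwalk_done(&walk, 0, len);
        } while (len);
}

whereas the "one pair" version runs the whole walk inside a single
kernel_neon_begin()/kernel_neon_end() region:

static void ccm_calculate_auth_mac(struct aead_request *req, u8 mac[])
{
        struct scatter_walk walk;
        u32 len = req->assoclen;

        scatterwalk_start(&walk, req->src);

        /* single NEON region around the entire associated data walk */
        kernel_neon_begin();
        do {
                u32 n = scatterwalk_clamp(&walk, len);
                u8 *p = scatterwalk_map(&walk);

                ccm_update_mac(mac, p, n);      /* simplified helper */

                len -= n;
                scatterwalk_unmap(p);
                scatterwalk_advance(&walk, n);
                scatterwalk_done(&walk, 0, len);
        } while (len);
        kernel_neon_end();
}

The latter is only acceptable because a synchronous AEAD is guaranteed
to run in task or softirq context, and softirqs are no longer delivered
while the interrupted task has the NEON unit in use.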