[ TL;DR for the non-ARM folks on CC: disabling softirq processing while
  using SIMD in kernel mode could reduce complexity and improve
  performance, but we need to decide whether we can do this, and how much
  softirq processing latency we can tolerate. If we can find a
  satisfactory solution for this, we might do the same for x86 and 32-bit
  ARM as well. ]

The crypto API provides two ways to invoke symmetric encryption
algorithms:
- synchronously, where the transformation is guaranteed to be done by the
  time the function returns;
- asynchronously, where the function may return with a -EINPROGRESS return
  code, and a completion is signalled when the transformation is done.

The latter is mainly intended for h/w accelerators, whose throughput would
otherwise be severely limited by the latency. However, it is also being
used for software algorithms based on SIMD instructions, which cannot be
issued from just any context (the rules are not the same on each
architecture, but typically, SIMD can be used in task context, or in
softirq context if the softirq was not taken while the SIMD unit was
already in use in kernel mode).

Many users of the crypto API exist in the kernel today that opt out of
this asynchronous interface (802.11, macsec, kerberos, sw kTLS), or use a
library interface which is fundamentally synchronous (wireguard). This
means we end up running in a degraded mode both in the contended case (a
scalar fallback - the pattern is sketched after the questions below) and
in the uncontended case (generic GCM/CCM/CTR chaining mode templates
wrapped around the SIMD cipher, as opposed to accelerated implementations
of the full chaining modes in question). Note that scalar AES runs ~20x
slower than the SIMD instruction based version.

So let's address this for arm64, by reorganizing kernel mode SIMD support
so that the SIMD unit can always be assumed to be available. This means we
need to defer softirq processing when grabbing the NEON unit in task
context, so that any use of it in softirq context is guaranteed not to
interrupt code that was already using the NEON. This obviously impacts
softirq processing latency, which is why the existing conditional NEON
yield support is modified to take pending softirqs into account.

As an example of how this impacts the code, the existing arm64 GCM driver
is updated to:
- Add yield support - currently, the pending softirq check is performed
  every 64 bytes of input, which is way too often; one of the desired
  outcomes of this RFC is getting a reasonable ballpark for how long we
  want to run with softirqs disabled.
- Remove the existing scalar fallbacks, which are no longer needed.

Questions:
- what did I miss or break horribly?
- does any of this matter for RT? AIUI, RT runs softirqs from a dedicated
  kthread, so I don't think it cares.
- what would be a reasonable upper bound on how long we keep softirqs
  disabled? I suppose 100s of cycles or less is overkill, but I'm not sure
  how to derive a better answer.
- could we do the same on x86, now that kernel_fpu_begin/end is no longer
  expensive?
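
For reference, this is the shape of the fallback pattern that the arm64
SIMD crypto glue code has to carry today, and which the last patch removes
for GCM. It is only a sketch: simd_transform() and scalar_transform() are
made-up placeholder names, while crypto_simd_usable() and
kernel_neon_begin()/kernel_neon_end() are the actual APIs involved.

/*
 * Illustrative sketch only - simd_transform() and scalar_transform()
 * stand in for the real asm routines of a given algorithm.
 */
#include <linux/types.h>
#include <asm/neon.h>
#include <crypto/internal/simd.h>

void simd_transform(u8 *dst, const u8 *src, unsigned int len);
void scalar_transform(u8 *dst, const u8 *src, unsigned int len);

static void do_transform(u8 *dst, const u8 *src, unsigned int len)
{
	if (crypto_simd_usable()) {
		/* NEON may be used in the current context */
		kernel_neon_begin();
		simd_transform(dst, src, len);
		kernel_neon_end();
	} else {
		/*
		 * We may be running in softirq context on top of task
		 * level code that already owns the NEON unit, so fall
		 * back to the (~20x slower for AES) scalar code.
		 */
		scalar_transform(dst, src, len);
	}
}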
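
And a hand-wavy illustration of what the series is aiming for - this is
not the actual fpsimd.c change (see patch 4 for that), just the idea:
claiming the NEON unit in task context defers softirq processing until it
is released again, so a softirq handler can never find the NEON registers
in use, and the fallback above becomes unnecessary. Long running
transforms then have to yield periodically so that pending softirqs are
not held off for too long; YIELD_GRANULARITY below is a placeholder, and
picking a sane value for it is one of the open questions.

#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/types.h>
#include <asm/neon.h>

#define YIELD_GRANULARITY	4096U	/* placeholder value */

void simd_transform(u8 *dst, const u8 *src, unsigned int len);

void kernel_neon_begin(void)
{
	/*
	 * Defer softirq processing (and, on !RT, preemption) while
	 * this task owns the NEON registers.
	 */
	local_bh_disable();

	/* save or invalidate the task's own FPSIMD state, as before */
}

void kernel_neon_end(void)
{
	/* any softirqs that became pending in the meantime run here */
	local_bh_enable();
}

/* callers processing large inputs yield every so many bytes */
static void do_large_transform(u8 *dst, const u8 *src, unsigned int len)
{
	kernel_neon_begin();
	while (len > 0) {
		unsigned int chunk = min(len, YIELD_GRANULARITY);

		simd_transform(dst, src, chunk);
		dst += chunk;
		src += chunk;
		len -= chunk;

		if (len > 0 &&
		    (local_softirq_pending() || need_resched())) {
			/* let pending softirqs or a reschedule happen */
			kernel_neon_end();
			kernel_neon_begin();
		}
	}
	kernel_neon_end();
}
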
Cc: Dave Martin <dave.martin@xxxxxxx>
Cc: Mark Brown <broonie@xxxxxxxxxx>
Cc: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Cc: Eric Biggers <ebiggers@xxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>

Ard Biesheuvel (5):
  crypto: aead - disallow en/decrypt for non-task or non-softirq context
  crypto: skcipher - disallow en/decrypt for non-task or non-softirq context
  crypto: arm64/gcm-aes-ce - add NEON yield support
  arm64: fpsimd: run kernel mode NEON with softirqs disabled
  crypto: arm64/gcm-aes-ce - remove non-SIMD fallback path

 arch/arm64/crypto/ghash-ce-core.S  | 115 ++++++-----
 arch/arm64/crypto/ghash-ce-glue.c  | 209 +++++---------------
 arch/arm64/include/asm/assembler.h |  19 +-
 arch/arm64/kernel/asm-offsets.c    |   2 +
 arch/arm64/kernel/fpsimd.c         |   4 +-
 crypto/aead.c                      |  10 +
 crypto/skcipher.c                  |  10 +
 7 files changed, 155 insertions(+), 214 deletions(-)

--
2.17.1