On Fri, 25 Dec 2020 at 20:14, Eric Biggers <ebiggers@xxxxxxxxxx> wrote:
>
> On Tue, Dec 22, 2020 at 05:06:27PM +0100, Ard Biesheuvel wrote:
> > The AES-NI implementation of XTS was impacted significantly by the retpoline
> > changes, which is due to the fact that both its asm helper and the chaining
> > mode glue library use indirect calls for processing small quantities of
> > data.
> >
> > So let's fix this, by:
> > - creating a minimal, backportable fix that recovers most of the performance,
> >   by reducing the number of indirect calls substantially;
> > - for future releases, rewriting the XTS implementation completely, and
> >   replacing the glue helper with a core asm routine that is more flexible,
> >   making the C code wrapper much more straightforward.
> >
> > This results in a substantial performance improvement: around 2x for 1k and
> > 4k blocks, and more than 3x for ~1k blocks that require ciphertext stealing
> > (benchmarked with tcrypt using 1420 byte blocks - full results below).
> >
> > It also allows us to enable the same driver for i386.
> >
> > Cc: Megha Dey <megha.dey@xxxxxxxxx>
> > Cc: Eric Biggers <ebiggers@xxxxxxxxxx>
> > Cc: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
> >
> > Ard Biesheuvel (2):
> >   crypto: x86/aes-ni-xts - use direct calls to and 4-way stride
> >   crypto: x86/aes-ni-xts - rewrite and drop indirections via glue helper
> >
> >  arch/x86/crypto/aesni-intel_asm.S  | 353 ++++++++++++++++----
> >  arch/x86/crypto/aesni-intel_glue.c | 230 +++++++------
> >  2 files changed, 412 insertions(+), 171 deletions(-)
> >
> > --
> > 2.17.1
> >
> > Benchmarked using tcrypt on an Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz.
>
> Thanks for doing this!  I didn't realize that there was such a big performance
> regression here.  Getting rid of these indirect calls looks like the right
> approach; this all seems to have been written for a world where indirect calls
> are much faster...
>
> I did some quick benchmarks on Zen ("AMD Ryzen Threadripper 1950X 16-Core
> Processor") with CONFIG_RETPOLINE=y and confirmed that the speedup on
> 4096-byte blocks is around 2x there too.  (It's over 2x for AES-128-XTS and
> AES-192-XTS, and a bit under 2x for AES-256-XTS.  And most of the speedup
> comes from the first patch.)  Also, the extra self-tests are passing.
>
> So feel free to add:
>
> Tested-by: Eric Biggers <ebiggers@xxxxxxxxxx> # x86_64
>
> Note that this patch series didn't apply cleanly, as it seems to depend on
> some other patches you've sent out recently.  So I actually tested your
> "for-kernelci" branch instead of applying these directly.
>

Thanks Eric. I have some other stuff queued up locally as well, so
there are some non-functional conflicts there. The only prerequisite
for this series is the patch that adds CTS-CBC support to AES-NI,
given that the XTS implementation reuses the permute table.

I will rebase and resend.
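
[For readers following the thread: the call-pattern change in the first
patch can be sketched as below. This is a minimal illustration, not the
actual aesni-intel code; the names (block_fn_t, xts_crypt_indirect,
aesni_xts_enc4) are hypothetical, kernel types such as u8 are assumed
from <linux/types.h>, and tweak computation is omitted for brevity.]

  typedef void (*block_fn_t)(const void *ctx, u8 *dst, const u8 *src);

  /*
   * Before: one indirect call per 16-byte block. With CONFIG_RETPOLINE=y,
   * every iteration pays for a retpoline thunk, which dwarfs the cost of
   * encrypting a single block with AES-NI.
   */
  static void xts_crypt_indirect(const void *ctx, block_fn_t fn,
                                 u8 *dst, const u8 *src, unsigned int nbytes)
  {
          while (nbytes >= 16) {
                  fn(ctx, dst, src);      /* retpoline on every block */
                  src += 16;
                  dst += 16;
                  nbytes -= 16;
          }
  }

  /*
   * After: a direct call into an asm routine that consumes four blocks
   * per invocation - no indirection, and a 4-way stride.
   */
  void aesni_xts_enc4(const void *ctx, u8 *dst, const u8 *src); /* asm */

  static void xts_crypt_direct(const void *ctx, u8 *dst, const u8 *src,
                               unsigned int nbytes)
  {
          while (nbytes >= 4 * 16) {
                  aesni_xts_enc4(ctx, dst, src);  /* direct, 4 blocks */
                  src += 4 * 16;
                  dst += 4 * 16;
                  nbytes -= 4 * 16;
          }
          /* 1-3 trailing blocks and ciphertext stealing handled separately */
  }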
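[The ciphertext-stealing case called out in the benchmarks (inputs such
as 1420 bytes, which are not a multiple of the 16-byte AES block size)
follows the usual IEEE P1619 construction: the final partial block
steals the tail of the preceding ciphertext block to round itself up.
A minimal sketch, assuming a hypothetical xts_encrypt_block() helper
that XORs in the tweak, encrypts one block, and XORs the tweak back out:]

  /*
   * Encrypt the last (16 + tail) bytes of a message, 0 < tail < 16.
   * src points at the final full plaintext block; the 'tail' partial
   * bytes follow it. 'tweak' and 'next_tweak' belong to those two
   * block positions. Helper names are hypothetical.
   */
  static void xts_encrypt_cts(const void *ctx, u8 *dst, const u8 *src,
                              unsigned int tail, const u8 tweak[16],
                              const u8 next_tweak[16])
  {
          u8 cc[16], pp[16];

          /* Encrypt the last full plaintext block with its own tweak. */
          xts_encrypt_block(ctx, cc, src, tweak);

          /* Its head becomes the short final ciphertext block... */
          memcpy(dst + 16, cc, tail);

          /* ...and its tail is stolen to pad the partial plaintext. */
          memcpy(pp, src + 16, tail);
          memcpy(pp + tail, cc + tail, 16 - tail);

          /* The padded block, under the next tweak, is emitted first. */
          xts_encrypt_block(ctx, dst, pp, next_tweak);
  }

[The byte shuffling above is where the permute table mentioned in Ard's
reply comes in: the asm implementation uses a table introduced by the
CTS-CBC patch to do these byte moves with vector instructions, which is
why that patch is the one hard prerequisite for the series.]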
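[For anyone wanting to reproduce the numbers: tcrypt is the kernel's
in-tree crypto benchmark module. Assuming a kernel built with
CONFIG_CRYPTO_TEST=m, the AES speed tests - which cover XTS and include
the 1420 byte input size used above - can be run with something like:]

  # modprobe tcrypt mode=200 sec=1

[The module deliberately fails to load once the tests complete; the
throughput figures end up in the kernel log (dmesg).]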