On Sun, Jun 04, 2023 at 03:02:32PM -0700, Chang S. Bae wrote:
> On 6/4/2023 8:34 AM, Eric Biggers wrote:
> >
> > To re-iterate what I said on v6, the runtime alignment to a 16-byte boundary
> > should happen when translating the raw crypto_skcipher_ctx() into the pointer
> > to the aes_xts_ctx.  It should not happen when accessing each individual
> > field in the aes_xts_ctx.
> >
> > Yet, this code is still doing runtime alignment when accessing each
> > individual field, as the second argument to aes_set_key_common() is
> > 'void *raw_ctx' which aes_set_key_common() runtime-aligns to crypto_aes_ctx.
> >
> > We should keep everything consistent, which means making aes_set_key_common()
> > take a pointer to crypto_aes_ctx and not do the runtime alignment.
>
> Let me clarify what is the problem this patch tried to solve here. The
> current struct aesni_xts_ctx is ugly. So, the main story is let's fix it
> before using the code for AES-KL.
>
> Then, the rework part may be applicable for code re-usability. That seems to
> be okay to do here.
>
> Fixing the runtime alignment entirely seems to be touching other code than
> AES-XTS. Yes, that's ideal cleanup for consistency. But, it seems to be less
> relevant in this series. I'd be happy to follow up on that improvement
> though.

IMO the issue is that your patch makes the code (including the XTS code)
inconsistent because it makes it use a mix of both approaches: it aligns
each field individually, *and* it aligns the ctx up-front.

I was hoping to switch fully from the former approach to the latter
approach, instead of switching from the former approach to a mix of the
two approaches as you are proposing.

The following on top of this patch is what I am asking for.  I think it
would be appropriate to fold into this patch.

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 589648142c173..ad1ae7a88b59d 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -228,10 +228,10 @@ static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
 	return (struct aesni_xts_ctx *)aes_align_addr(crypto_skcipher_ctx(tfm));
 }
 
-static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
+static int aes_set_key_common(struct crypto_tfm *tfm,
+			      struct crypto_aes_ctx *ctx,
 			      const u8 *in_key, unsigned int key_len)
 {
-	struct crypto_aes_ctx *ctx = aes_ctx(raw_ctx);
 	int err;
 
 	if (key_len != AES_KEYSIZE_128 && key_len != AES_KEYSIZE_192 &&
@@ -252,7 +252,8 @@ static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
 static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 		       unsigned int key_len)
 {
-	return aes_set_key_common(tfm, crypto_tfm_ctx(tfm), in_key, key_len);
+	return aes_set_key_common(tfm, aes_ctx(crypto_tfm_ctx(tfm)),
+				  in_key, key_len);
 }
 
 static void aesni_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
@@ -285,7 +286,7 @@ static int aesni_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
 			   unsigned int len)
 {
 	return aes_set_key_common(crypto_skcipher_tfm(tfm),
-				  crypto_skcipher_ctx(tfm), key, len);
+				  aes_ctx(crypto_skcipher_ctx(tfm)), key, len);
 }
 
 static int ecb_encrypt(struct skcipher_request *req)
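
(For reference, the "align the ctx up-front" approach boils down to doing the
pointer alignment once in the *_ctx() helpers and nowhere else.  A minimal
sketch, assuming the file's existing AESNI_ALIGN constant and the kernel's
PTR_ALIGN()/crypto_tfm_ctx_alignment() helpers; not necessarily the exact
code in the tree:

	/* Round the raw tfm context up to a 16-byte boundary, once. */
	static inline void *aes_align_addr(void *addr)
	{
		if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN)
			return addr;
		return PTR_ALIGN(addr, AESNI_ALIGN);
	}

	/* All later field accesses use this already-aligned pointer. */
	static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
	{
		return (struct crypto_aes_ctx *)aes_align_addr(raw_ctx);
	}

With that, callers such as aesni_skcipher_setkey() pass
aes_ctx(crypto_skcipher_ctx(tfm)) down, and aes_set_key_common() can take a
struct crypto_aes_ctx * directly without re-aligning it.)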