Hi Elliott and Dave,
Thanks a lot for the reviews!
On 11/6/22 02:31, Dave Hansen wrote:
> On 11/5/22 09:20, Elliott, Robert (Servers) wrote:
>> --- a/arch/x86/crypto/aesni-intel_glue.c
>> +++ b/arch/x86/crypto/aesni-intel_glue.c
>> @@ -288,6 +288,10 @@ static int aes_set_key_common(struct crypto_tfm *tfm, void *raw_ctx,
>> struct crypto_aes_ctx *ctx = aes_ctx(raw_ctx);
>> int err;
>>
>> + BUILD_BUG_ON(offsetof(struct crypto_aes_ctx, key_enc) != 0);
>> + BUILD_BUG_ON(offsetof(struct crypto_aes_ctx, key_dec) != 240);
>> +	BUILD_BUG_ON(offsetof(struct crypto_aes_ctx, key_length) != 480);
>
> We have a nice fancy way of doing these. See things like
> CPU_ENTRY_AREA_entry_stack or TSS_sp0. It's all put together from
> arch/x86/kernel/asm-offsets.c and gets plopped in
> include/generated/asm-offsets.h.
>
> This is vastly preferred to hard-coded magic number offsets, even if
> they do have a BUILD_BUG_ON() somewhere.
I will define ARIA_CTX_xxx constants with asm-offsets.c.
Then the assembly code can use the correct offsets of enc_key, dec_key,
and rounds in struct aria_ctx.
Since the generated offsets are guaranteed to match the struct layout,
the BUILD_BUG_ON() checks become unnecessary.
I will send the v3 patch.
Thanks a lot!
Taehee Yoo