Re: [PATCH v2 05/13] crypto: simd - Update `walksize` in simd skcipher

On Nov 28, 2023, at 11:58, Eric Biggers <ebiggers@xxxxxxxxxx> wrote:
> On Mon, Nov 27, 2023 at 03:06:55PM +0800, Jerry Shih wrote:
>> The `walksize` assignment is missed in simd skcipher.
>> 
>> Signed-off-by: Jerry Shih <jerry.shih@xxxxxxxxxx>
>> ---
>> crypto/cryptd.c | 1 +
>> crypto/simd.c   | 1 +
>> 2 files changed, 2 insertions(+)
>> 
>> diff --git a/crypto/cryptd.c b/crypto/cryptd.c
>> index bbcc368b6a55..253d13504ccb 100644
>> --- a/crypto/cryptd.c
>> +++ b/crypto/cryptd.c
>> @@ -405,6 +405,7 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
>> 		(alg->base.cra_flags & CRYPTO_ALG_INTERNAL);
>> 	inst->alg.ivsize = crypto_skcipher_alg_ivsize(alg);
>> 	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
>> +	inst->alg.walksize = crypto_skcipher_alg_walksize(alg);
>> 	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
>> 	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
>> 
>> diff --git a/crypto/simd.c b/crypto/simd.c
>> index edaa479a1ec5..ea0caabf90f1 100644
>> --- a/crypto/simd.c
>> +++ b/crypto/simd.c
>> @@ -181,6 +181,7 @@ struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname,
>> 
>> 	alg->ivsize = ialg->ivsize;
>> 	alg->chunksize = ialg->chunksize;
>> +	alg->walksize = ialg->walksize;
>> 	alg->min_keysize = ialg->min_keysize;
>> 	alg->max_keysize = ialg->max_keysize;
> 
> What are the consequences of this bug?  I wonder if it actually matters?  The
> "inner" algorithm is the one that actually gets used for the "walk", right?
> 
> - Eric

Without this, the simd wrapper might still end up with chunksize or cra_blocksize
as its walksize even though the inner algorithm was set up with a larger walksize.

Here is the code that sets the default walksize:
	static int skcipher_prepare_alg(struct skcipher_alg *alg)
	{
		...
		if (!alg->chunksize)
			alg->chunksize = base->cra_blocksize;
		if (!alg->walksize)
			alg->walksize = alg->chunksize;

And the x86 aes-xts implementation already declares a larger walksize:
		.base = {
			.cra_name		= "__xts(aes)",
			...
		},
		.walksize	= 2 * AES_BLOCK_SIZE,
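
With this patch missing, the simd wrapper never copies that value, so (assuming
the inner driver does not set chunksize explicitly) the defaults work out
roughly as:

	/* inner alg:    walksize = 32, chunksize unset -> chunksize = cra_blocksize = 16 */
	/* simd wrapper: chunksize = 16 (copied), walksize unset -> walksize = chunksize = 16 */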

The x86 aes-xts implementation uses only one `walk` to handle the tail elements,
and it assumes the walk covers 2 AES blocks. If the walksize is not propagated
correctly, some of the tail bytes might not be processed when x86 aes-xts is
used through the simd wrapper.
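
Roughly (this is just a sketch of the pattern, not the actual arch/x86 glue
code), the tail handling it relies on looks like:

	/* The ciphertext-stealing tail is more than one but less than two
	 * AES blocks, and the code expects the whole tail from one walk step.
	 */
	tail = req->cryptlen % AES_BLOCK_SIZE + AES_BLOCK_SIZE;
	/* ... build a sub-request covering only the last `tail` bytes ... */
	err = skcipher_walk_virt(&walk, &subreq, false);
	/* assumes walk.nbytes == tail here; with walksize == AES_BLOCK_SIZE
	 * the walk could hand back just one block and the final partial
	 * block would be left unprocessed. */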

-Jerry



