On Wed, May 29, 2024 at 08:25:53AM +0800, Jia Jie Ho wrote:
> Hardware expects RSA input plain/ciphertext to be 32-bit aligned.
> Allocate aligned buffer and shift data accordingly.
>
> Signed-off-by: Jia Jie Ho <jiajie.ho@xxxxxxxxxxxxxxxx>
> ---
>  drivers/crypto/starfive/jh7110-cryp.h |  3 +--
>  drivers/crypto/starfive/jh7110-rsa.c  | 17 ++++++++++-------
>  2 files changed, 11 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/crypto/starfive/jh7110-cryp.h b/drivers/crypto/starfive/jh7110-cryp.h
> index 494a74f52706..eeb4e2b9655f 100644
> --- a/drivers/crypto/starfive/jh7110-cryp.h
> +++ b/drivers/crypto/starfive/jh7110-cryp.h
> @@ -217,12 +217,11 @@ struct starfive_cryp_request_ctx {
>  	struct scatterlist		*out_sg;
>  	struct ahash_request		ahash_fbk_req;
>  	size_t				total;
> -	size_t				nents;
>  	unsigned int			blksize;
>  	unsigned int			digsize;
>  	unsigned long			in_sg_len;
>  	unsigned char			*adata;
> -	u8 rsa_data[] __aligned(sizeof(u32));
> +	u8				*rsa_data;

You didn't explain why this is moving from a pre-allocated buffer
to one that's allocated on the fly.

It would appear that there is no reason why you can't build the
extra space used for shifting into reqsize.

Cheers,
-- 
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
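
[Editor's note: for readers unfamiliar with the reqsize suggestion above, the
sketch below shows roughly what it could look like. It is an illustration only,
not the actual driver code: starfive_rsa_init_tfm(), STARFIVE_RSA_MAX_KEYSZ and
the padding amount are assumptions about the StarFive driver, which does not
appear in the quoted hunk.]

	/*
	 * Sketch only: reserve the worst-case 32-bit-aligned copy of the
	 * RSA plain/ciphertext together with the request context, so the
	 * data path never needs a runtime allocation.
	 */
	static int starfive_rsa_init_tfm(struct crypto_akcipher *tfm)
	{
		akcipher_set_reqsize(tfm,
				     sizeof(struct starfive_cryp_request_ctx) +
				     STARFIVE_RSA_MAX_KEYSZ + sizeof(u32));
		return 0;
	}

The request path could then carve the shift buffer out of the per-request
memory instead of calling kmalloc(), e.g. (again a sketch):

	struct starfive_cryp_request_ctx *rctx = akcipher_request_ctx(req);
	/* space past the ctx was reserved in init_tfm; align it to 32 bits */
	u8 *rsa_data = PTR_ALIGN((u8 *)(rctx + 1), sizeof(u32));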