On Thu, Dec 26, 2024 at 10:30:49PM +0530, Atharva Tiwari wrote:
> The `vmac_update` function previously assumed that `p` was aligned,
> which could lead to misaligned memory accesses when processing
> blocks.  This patch resolves the issue by introducing a temporary
> buffer to ensure alignment.
>
> Changes include:
> - Allocating a temporary buffer (`__le64 *data`) to store aligned blocks.
> - Using `get_unaligned_le64` to safely read data into the temporary buffer.
> - Iteratively processing blocks with the `vhash_blocks` function.
> - Properly freeing the allocated temporary buffer after processing.
>
> Signed-off-by: Atharva Tiwari <evepolonium@xxxxxxxxx>
> ---
>  crypto/vmac.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/crypto/vmac.c b/crypto/vmac.c
> index 2ea384645ecf..513fbd5bc581 100644
> --- a/crypto/vmac.c
> +++ b/crypto/vmac.c
> @@ -518,9 +518,19 @@ static int vmac_update(struct shash_desc *desc, const u8 *p, unsigned int len)
>
>  	if (len >= VMAC_NHBYTES) {
>  		n = round_down(len, VMAC_NHBYTES);
> -		/* TODO: 'p' may be misaligned here */
> -		vhash_blocks(tctx, dctx, (const __le64 *)p, n / VMAC_NHBYTES);
> -		p += n;
> +		const u8 *end = p + n;
> +		const uint16_t num_blocks = VMAC_NHBYTES / sizeof(__le64);
> +		__le64 *data = kmalloc(num_blocks * sizeof(__le64), GFP_KERNEL);
> +
> +		while (p < end) {
> +			for (unsigned short i = 0; i < num_blocks; i++) {
> +				data[i] = get_unaligned_le64(p + i * sizeof(__le64));
> +			}

This is not what I meant by using get_unaligned_le64.  I meant
replacing the actual 64-bit accesses within vhash_blocks with
get_unaligned_le64.

Cheers,
-- 
Email: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
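
For illustration, a minimal, untested sketch of the direction suggested
above, assuming the 64-bit message loads in vhash_blocks() go through
le64_to_cpup() inside the nh_16() macro (as in the current crypto/vmac.c)
and that get_unaligned_le64() is available via the kernel's
unaligned-access helpers (historically <asm/unaligned.h>):

	/*
	 * Sketch only: the NH inner loop with each message word loaded
	 * via get_unaligned_le64() instead of le64_to_cpup(), so 'mp'
	 * no longer needs to be 8-byte aligned and no bounce buffer or
	 * allocation is required in vmac_update().
	 */
	#define nh_16(mp, kp, nw, rh, rl)				\
	do {								\
		u64 th, tl;						\
		int i;							\
		rh = rl = 0;						\
		for (i = 0; i < nw; i += 2) {				\
			MUL64(th, tl,					\
			      get_unaligned_le64((mp) + i) + (kp)[i],	\
			      get_unaligned_le64((mp) + i + 1) + (kp)[i + 1]); \
			ADD128(rh, rl, th, tl);				\
		}							\
	} while (0)

The same substitution would apply to any other nh_* variant that reads
from 'mp' (e.g. the unrolled VMAC_NHBYTES versions, if present).  With
the loads fixed at the point of access, the existing (const __le64 *)p
cast in vmac_update() stays as-is and the "TODO: 'p' may be misaligned
here" comment can simply be removed, with no temporary buffer needed.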