From: Eric Biggers <ebiggers@xxxxxxxxxx>

commit 4a8108b70508df0b6c4ffa4a3974dab93dcbe851 upstream.

If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.

xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected
by this despite unconditionally accessing walk.iv.  However this is
more subtle than desired, and unconditionally accessing walk.iv has
caused a real problem in other algorithms.  Thus, update xts-aes-neonbs
to start checking the return value of skcipher_walk_virt().

Fixes: 1abee99eafab ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64")
Cc: <stable@xxxxxxxxxxxxxxx> # v4.11+
Signed-off-by: Eric Biggers <ebiggers@xxxxxxxxxx>
Signed-off-by: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/arm64/crypto/aes-neonbs-glue.c | 2 ++
 1 file changed, 2 insertions(+)

--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -307,6 +307,8 @@ static int __xts_crypt(struct skcipher_r
 	int err;
 
 	err = skcipher_walk_virt(&walk, req, true);
+	if (err)
+		return err;
 
 	kernel_neon_begin();
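
For context, here is a minimal sketch (not part of the patch) of the
caller pattern this fix enforces: check the return value of
skcipher_walk_virt() before touching walk.iv, since on failure walk.iv
may point at freed memory.  The function name example_crypt and its
body are hypothetical, illustrating the pattern rather than the actual
xts-aes-neonbs code.

	/*
	 * Hypothetical skcipher walk loop.  skcipher_walk_virt() may have
	 * copied the request IV into an aligned bounce buffer (walk.iv),
	 * and it may fail afterwards; walk.iv must not be accessed if it
	 * returned an error.
	 */
	static int example_crypt(struct skcipher_request *req)
	{
		struct skcipher_walk walk;
		int err;

		err = skcipher_walk_virt(&walk, req, true);
		if (err)
			return err;	/* walk.iv may be invalid here */

		while (walk.nbytes) {
			/*
			 * Safe to use walk.iv, walk.src.virt.addr and
			 * walk.dst.virt.addr for this chunk.
			 */
			err = skcipher_walk_done(&walk, 0);
		}
		return err;
	}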