This is a note to let you know that I've just added the patch titled

    ubi: Fix UAF wear-leveling entry in eraseblk_count_seq_show()

to the 5.10-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     ubi-fix-uaf-wear-leveling-entry-in-eraseblk_count_se.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 6f2c367e2277255ed0ae21fa89baa499db0488a9
Author: Zhihao Cheng <chengzhihao1@xxxxxxxxxx>
Date:   Sat Jul 30 19:28:37 2022 +0800

    ubi: Fix UAF wear-leveling entry in eraseblk_count_seq_show()

    [ Upstream commit a240bc5c43130c6aa50831d7caaa02a1d84e1bce ]

    Wear-leveling entry could be freed in error path, which may be accessed
    again in eraseblk_count_seq_show(), for example:

    __erase_worker                        eraseblk_count_seq_show
                                          wl = ubi->lookuptbl[*block_number]
                                          if (wl)
      wl_entry_destroy
        ubi->lookuptbl[e->pnum] = NULL
        kmem_cache_free(ubi_wl_entry_slab, e)
                                          erase_count = wl->ec  // UAF!

    Wear-leveling entry updating/accessing in ubi->lookuptbl should be
    protected by ubi->wl_lock, fix it by adding ubi->wl_lock to serialize
    wl entry accessing between wl_entry_destroy() and
    eraseblk_count_seq_show().

    Fetch a reproducer in [Link].

    Link: https://bugzilla.kernel.org/show_bug.cgi?id=216305
    Fixes: 7bccd12d27b7e3 ("ubi: Add debugfs file for tracking PEB state")
    Fixes: 801c135ce73d5d ("UBI: Unsorted Block Images")
    Signed-off-by: Zhihao Cheng <chengzhihao1@xxxxxxxxxx>
    Signed-off-by: Richard Weinberger <richard@xxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 820b5c1c8e8e7..7406bc96affb5 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -885,8 +885,11 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
 
 	err = do_sync_erase(ubi, e1, vol_id, lnum, 0);
 	if (err) {
-		if (e2)
+		if (e2) {
+			spin_lock(&ubi->wl_lock);
 			wl_entry_destroy(ubi, e2);
+			spin_unlock(&ubi->wl_lock);
+		}
 		goto out_ro;
 	}
 
@@ -1121,14 +1124,18 @@ static int __erase_worker(struct ubi_device *ubi, struct ubi_work *wl_wrk)
 		/* Re-schedule the LEB for erasure */
 		err1 = schedule_erase(ubi, e, vol_id, lnum, 0, false);
 		if (err1) {
+			spin_lock(&ubi->wl_lock);
 			wl_entry_destroy(ubi, e);
+			spin_unlock(&ubi->wl_lock);
 			err = err1;
 			goto out_ro;
 		}
 		return err;
 	}
 
+	spin_lock(&ubi->wl_lock);
 	wl_entry_destroy(ubi, e);
+	spin_unlock(&ubi->wl_lock);
 	if (err != -EIO)
 		/*
 		 * If this is not %-EIO, we have no idea what to do. Scheduling
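
For context, the reader side of the race described in the commit message
lives in eraseblk_count_seq_show(). The fragment below is an illustrative
sketch of the locking pattern this fix relies on, not part of the patch
itself: read_erase_count() is a hypothetical helper, while ubi->wl_lock,
ubi->lookuptbl[] and wl->ec are the real fields named in the commit
message. The point is that an entry fetched from ubi->lookuptbl[] may only
be dereferenced while ubi->wl_lock is held, because the error paths above
now free entries under that same lock.

	/*
	 * Sketch of the reader-side pattern (assumed helper, not the
	 * literal debug.c code): fetch a PEB's erase counter without
	 * racing wl_entry_destroy().
	 */
	static int read_erase_count(struct ubi_device *ubi, int pnum)
	{
		struct ubi_wl_entry *wl;
		int ec = -1;

		spin_lock(&ubi->wl_lock);
		wl = ubi->lookuptbl[pnum];  /* NULL if the PEB has no entry */
		if (wl)
			ec = wl->ec;        /* safe: destroy path holds wl_lock too */
		spin_unlock(&ubi->wl_lock);

		return ec;
	}

With the wl.c hunks above, both free sites (the wear-leveling error path
and __erase_worker()) call wl_entry_destroy() under ubi->wl_lock, so a
reader following this pattern can no longer observe a freed entry.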