Thanks for catching this. How about adding a Fixes tag?
Reviewed-by: Jia Zhu <zhujia.zj@xxxxxxxxxxxxx>
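For reference, the tag format documented in Documentation/process/submitting-patches.rst is shown below; the SHA and subject are placeholders, since the offending commit isn't named here:

    Fixes: <first 12 characters of the offending commit's SHA-1> ("<that commit's subject line>")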
On 2024/4/24 11:34, libaokun@xxxxxxxxxxxxxxx wrote:
From: Hou Tao <houtao1@xxxxxxxxxx>
When queuing ondemand_object_worker() to re-open the object, the
cachefiles_object is not pinned. The cachefiles_object may be freed when
the pending read request is completed intentionally and the related
erofs is umounted. If ondemand_object_worker() runs after the object has
been freed, it will incur a use-after-free problem as shown below.
process A   process B   process C   process D

cachefiles_ondemand_send_req()
// send a read req X
// wait for its completion

            // close ondemand fd
            cachefiles_ondemand_fd_release()
            // set object as CLOSE

                        cachefiles_ondemand_daemon_read()
                        // set object as REOPENING
                        queue_work(fscache_wq, &info->ondemand_work)

                                    // close /dev/cachefiles
                                    cachefiles_daemon_release
                                    cachefiles_flush_reqs
                                    complete(&req->done)

// read req X is completed
// umount the erofs fs
cachefiles_put_object()
// object will be freed
cachefiles_ondemand_deinit_obj_info()
kmem_cache_free(object)

                        // both info and object are freed
                        ondemand_object_worker()
When dropping an object, it is no longer necessary to reopen it, so use
cancel_work_sync() to cancel a queued ondemand_object_worker() or wait
for one already running to complete.
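For readers less familiar with the workqueue API, the fix follows the usual teardown pattern for an object that embeds a work_struct: cancel_work_sync() removes a still-pending work item or waits for a running worker to return, so the structure can be freed safely afterwards. Below is a minimal, self-contained sketch of that pattern; the demo_* names are invented for illustration and are not the cachefiles code.

/*
 * Minimal sketch (not the cachefiles code): an object embeds a
 * work_struct, the worker dereferences the object, and teardown must
 * cancel the work or wait for it to finish before freeing the object.
 * All demo_* names are invented for illustration.
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_object {
        int id;
        struct work_struct work;
};

static void demo_worker(struct work_struct *work)
{
        struct demo_object *obj = container_of(work, struct demo_object, work);

        /* Dereferences the object; racing with kfree() would be a UAF. */
        pr_info("demo: re-opening object %d\n", obj->id);
}

static struct demo_object *demo_object_create(int id)
{
        struct demo_object *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

        if (!obj)
                return NULL;
        obj->id = id;
        INIT_WORK(&obj->work, demo_worker);
        return obj;
}

static void demo_object_drop(struct demo_object *obj)
{
        /*
         * Remove the work item if it is still queued, or wait for a
         * running demo_worker() to return, so it cannot touch the
         * object after kfree().
         */
        cancel_work_sync(&obj->work);
        kfree(obj);
}

static int __init demo_init(void)
{
        struct demo_object *obj = demo_object_create(1);

        if (!obj)
                return -ENOMEM;

        queue_work(system_wq, &obj->work);      /* like the re-open path */
        demo_object_drop(obj);                  /* like dropping the object */
        return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");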
Signed-off-by: Hou Tao <houtao1@xxxxxxxxxx>
Signed-off-by: Baokun Li <libaokun1@xxxxxxxxxx>
---
fs/cachefiles/ondemand.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
index d24bff43499b..f6440b3e7368 100644
--- a/fs/cachefiles/ondemand.c
+++ b/fs/cachefiles/ondemand.c
@@ -589,6 +589,9 @@ void cachefiles_ondemand_clean_object(struct cachefiles_object *object)
                 }
         }
         xa_unlock(&cache->reqs);
+
+        /* Wait for ondemand_object_worker() to finish to avoid UAF. */
+        cancel_work_sync(&object->ondemand->ondemand_work);
 }
 
 int cachefiles_ondemand_init_obj_info(struct cachefiles_object *object,