On Thu, Jul 01, 2021 at 02:58:07PM +0800, lijiazi wrote:
> There is a potential race between fuse_read_interrupt and
> fuse_dev_do_write contexts as shown below:
>
> TASK1:
> fuse_dev_do_write:
>                     TASK2:
>                     fuse_dev_do_read:
>                       spin_lock(&fiq->lock);
>                       fuse_read_interrupt();
> list_del_init(&req->intr_entry);
> fuse_request_end(); // now req->intr_entry
>                     // is empty, so put this req
>                                         TASK3:
>                                         fuse_flush:
>                                           fuse_simple_request();
>                                           fuse_put_request();
>                                           kmem_cache_free(); // free req
>
> After TASK3 frees req, TASK2 accesses this req in fuse_read_interrupt
> and gets the crash below:

Great report, thanks.

> Put list_del_init after the code that accesses req; if intr_entry is
> not empty, fuse_request_end will wait for fiq->lock, so req will not
> be freed before TASK2 unlocks fiq->lock.

Unfortunately this doesn't reliably work, since there's no memory
barrier to prevent the compiler or the CPU from rearranging those
stores back into a use-after-free configuration.

One way to fix that would be to use the list_del_init_careful()/
list_empty_careful() pair, which guarantees ordering.

However, the patch below is simpler and also works, since
FR_INTERRUPTED is never cleared, and it's safe to use list_del_init()
unconditionally after having locked fiq->lock.

Thanks,
Miklos

diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index 1c8f79b3dd06..dde341a6388a 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -288,10 +288,10 @@ void fuse_request_end(struct fuse_req *req)

 		/*
 		 * test_and_set_bit() implies smp_mb() between bit
-		 * changing and below intr_entry check. Pairs with
+		 * changing and below FR_INTERRUPTED check. Pairs with
 		 * smp_mb() from queue_interrupt().
 		 */
-		if (!list_empty(&req->intr_entry)) {
+		if (test_bit(FR_INTERRUPTED, &req->flags)) {
 			spin_lock(&fiq->lock);
 			list_del_init(&req->intr_entry);
 			spin_unlock(&fiq->lock);