On Thu, 8 Feb 2024 at 18:09, Amir Goldstein <amir73il@xxxxxxxxx> wrote:

>  static int fuse_inode_get_io_cache(struct fuse_inode *fi)
>  {
> +       int err = 0;
> +
>         assert_spin_locked(&fi->lock);
> -       if (fi->iocachectr < 0)
> -               return -ETXTBSY;
> -       if (fi->iocachectr++ == 0)
> -               set_bit(FUSE_I_CACHE_IO_MODE, &fi->state);
> -       return 0;
> +       /*
> +        * Setting the bit advises new direct-io writes to use an exclusive
> +        * lock - without it the wait below might be forever.
> +        */
> +       set_bit(FUSE_I_CACHE_IO_MODE, &fi->state);
> +       while (!err && fuse_is_io_cache_wait(fi)) {
> +               spin_unlock(&fi->lock);
> +               err = wait_event_killable(fi->direct_io_waitq,
> +                                         !fuse_is_io_cache_wait(fi));
> +               spin_lock(&fi->lock);
> +       }
> +       /*
> +        * Enter caching mode or clear the FUSE_I_CACHE_IO_MODE bit if we
> +        * failed to enter caching mode and no other caching open exists.
> +        */
> +       if (!err)
> +               fi->iocachectr++;
> +       else if (fi->iocachectr <= 0)
> +               clear_bit(FUSE_I_CACHE_IO_MODE, &fi->state);

This seems wrong: if the current task is killed, and there's another
task trying to get cached open mode, then clearing FUSE_I_CACHE_IO_MODE
will allow new parallel writes, breaking this logic.

I'd suggest fixing this by not making the wait killable.  It's just a
nice-to-have, but without all the other waits being killable
(e.g. filesystem locks) it's just a drop in the ocean.

Thanks,
Miklos