On Mon, Feb 24 2025, Miklos Szeredi wrote:

> On Thu, 30 Jan 2025 at 11:16, Luis Henriques <luis@xxxxxxxxxx> wrote:
>>
>> Userspace filesystems can push data for a specific inode without it being
>> explicitly requested. This can be accomplished by using NOTIFY_STORE.
>> However, this may race against another process performing different
>> operations on the same inode.
>>
>> If, for example, there is a process reading from it, it may happen that it
>> will block waiting for data to be available (locking the folio), while the
>> FUSE server will also block trying to lock the same folio to update it with
>> the inode data.
>>
>> The easiest solution, as suggested by Miklos, is to allow the userspace
>> filesystem to skip locked folios.
>
> Not sure.
>
> The easiest solution is to make the server perform the two operations
> independently. I.e. never trigger a notification from a request.
>
> This is true of other notifications, e.g. doing FUSE_NOTIFY_DELETE
> during e.g. FUSE_RMDIR will deadlock on i_mutex.

Hmmm... OK, the NOTIFY_DELETE and NOTIFY_INVAL_ENTRY deadlocks are
documented (in libfuse, at least).  So, maybe this one could be added to
the list of notifications that can deadlock.  However, IMHO, it would be
great if this could be fixed instead.

> Or am I misunderstanding the problem?

I believe the initial report[1] actually describes a specific use-case
where the deadlock can happen even when the server performs the two
operations independently.  For example:

- An application reads 4K of data at offset 0
- The server gets a read request.  It performs the read, and gets more
  data than requested (say 4M)
- It caches this data in userspace and replies to the VFS with 4K of data
- The server does a notify_store with the remainder of the data
- In the meantime, the userspace application reads another 4K at offset 4K

The last two operations can race, and the server may deadlock if the
application's read has already locked the page that the data will be
read into.

Does it make sense?

[1] https://lore.kernel.org/CH2PR14MB41040692ABC50334F500789ED6C89@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Cheers,
--
Luís
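
[Editorial note: the scenario above can be illustrated with a minimal
server-side sketch.  This assumes the libfuse 3 lowlevel API; only
fuse_reply_buf(), fuse_reply_err() and fuse_lowlevel_notify_store() are
real libfuse calls, while backend_read_ahead(), the 4M read-ahead size
and storing the session pointer as request userdata are hypothetical.
It is a sketch of the pattern that can race, not a recommended design.]

#define FUSE_USE_VERSION 34

#include <fuse_lowlevel.h>
#include <errno.h>
#include <stdlib.h>

#define READ_AHEAD (4 * 1024 * 1024)	/* hypothetical 4M read-ahead */

/* Hypothetical backend helper: fills buf with up to len bytes, returns count. */
extern ssize_t backend_read_ahead(fuse_ino_t ino, off_t off, char *buf, size_t len);

static void sketch_read(fuse_req_t req, fuse_ino_t ino, size_t size,
			off_t off, struct fuse_file_info *fi)
{
	/* Assumes the session pointer was passed as userdata to fuse_session_new(). */
	struct fuse_session *se = fuse_req_userdata(req);
	char *buf = malloc(READ_AHEAD);
	ssize_t got = backend_read_ahead(ino, off, buf, READ_AHEAD);

	if (got < 0) {
		fuse_reply_err(req, EIO);
		free(buf);
		return;
	}

	/* Reply to the original read request (e.g. 4K at offset 0). */
	size_t reply = (size_t)got < size ? (size_t)got : size;
	fuse_reply_buf(req, buf, reply);

	/*
	 * Push the remaining read-ahead data into the kernel's cache.  If an
	 * application is concurrently reading at off + reply, the kernel may
	 * already hold the folio lock there while waiting for the FUSE read
	 * reply, and this notify_store will block on that same lock -- the
	 * race described in the mail above.
	 */
	if ((size_t)got > reply) {
		struct fuse_bufvec bufv = FUSE_BUFVEC_INIT(got - reply);

		bufv.buf[0].mem = buf + reply;
		fuse_lowlevel_notify_store(se, ino, off + reply, &bufv, 0);
	}
	free(buf);
}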