On Tue, May 11, 2021 at 04:01:13PM +0200, Andreas Gruenbacher wrote:
> we have a locking problem in gfs2 that I don't have a proper solution for, so
> I'm looking for suggestions.
>
> What's happening is that a page fault triggers during a read or write
> operation, while we're holding a glock (the cluster-wide gfs2 inode
> lock), and the page fault requires another glock. We can recognize and
> handle the case when both glocks are the same, but when the page fault requires
> another glock, there is a chance that taking that other glock would deadlock.

So we're looking at something like one file on a gfs2 filesystem being
mmap()ed, and then a read() or write() being done on another gfs2 file
with the mmapped address passed as the buffer to read()/write()?

Have you looked at iov_iter_fault_in_readable() as a solution to your
locking-order problem? That way, you bring the mmapped page in first
(see generic_perform_write()).

> When we realize that we may not be able to take the other glock in gfs2_fault,
> we need to communicate that to the read or write operation, which will then
> drop and re-acquire the "outer" glock and retry. However, there doesn't seem
> to be a good way to do that; we can only indicate that a page fault should fail
> by returning VM_FAULT_SIGBUS or similar; that will then be mapped to -EFAULT.
> We'd need something like VM_FAULT_RESTART that can be mapped to -EBUSY so that
> we can tell the retry case apart from genuine -EFAULT errors.

We do have VM_FAULT_RETRY ... does that retry at the wrong level?
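
For reference, the pattern I'm pointing at is the loop in
generic_perform_write(). This is only a trimmed sketch to show the shape
of it (error handling, iterator advancing, and the short-copy retry
details are elided; exact signatures vary between kernel versions):

```c
	/* Sketch of the generic_perform_write() pre-fault pattern. */
	do {
		/*
		 * Fault the source user page(s) in *before* taking any
		 * filesystem locks or disabling page faults, so the
		 * copy below cannot recurse into ->fault while a lock
		 * (in gfs2's case, a glock) is held.
		 */
		if (unlikely(iov_iter_fault_in_readable(i, bytes))) {
			status = -EFAULT;
			break;
		}

		status = a_ops->write_begin(file, mapping, pos, bytes,
					    flags, &page, &fsdata);
		if (unlikely(status < 0))
			break;

		/*
		 * The copy runs with page faults disabled, so it can
		 * only copy from pages that are already resident; if
		 * the page was reclaimed in the meantime we get a
		 * short copy and simply go around the loop again,
		 * faulting it back in.
		 */
		copied = iov_iter_copy_from_user_atomic(page, i,
							offset, bytes);

		status = a_ops->write_end(file, mapping, pos, bytes,
					  copied, page, fsdata);
		/* ... advance pos/iterator, handle short copies ... */
	} while (iov_iter_count(i));
```

The point being: the fault happens outside the locked region, and the
locked region only ever does a nofault copy, so the lock inversion never
arises in the first place.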