On Thu, Sep 8, 2016 at 10:56 AM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
>
> ... and brings back a lot of other crap.  The thing is, pipe lock should
> be on the outside of everything fs might be taking, so that splice IO
> is the same as normal IO as far as filesystem locking is concerned.  For
> the write side it had been done in that commit, for the read side it's yet
> to be done.

Al, look at generic_file_splice_read().

The problem is that XFS takes its own

        xfs_rw_ilock(ip, XFS_IOLOCK_SHARED);
        ...
        xfs_rw_iunlock(ip, XFS_IOLOCK_SHARED);

around all the generic file accessors.

So for example, it doesn't use "generic_file_read_iter()", it does

        STATIC ssize_t
        xfs_file_buffered_aio_read(
                struct kiocb            *iocb,
                struct iov_iter         *to)
        {
                struct xfs_inode        *ip = XFS_I(file_inode(iocb->ki_filp));
                ssize_t                 ret;

                trace_xfs_file_buffered_read(ip, iov_iter_count(to), iocb->ki_pos);

                xfs_rw_ilock(ip, XFS_IOLOCK_SHARED);
                ret = generic_file_read_iter(iocb, to);
                xfs_rw_iunlock(ip, XFS_IOLOCK_SHARED);

                return ret;
        }

and the exact same pattern holds for generic_file_splice_read().

So the XFS model *requires* that XFS_IOLOCK be outside all those
operations.  But then in iter_file_splice_write() we have the other
ordering: the pipe lock is taken first, and ->write_iter() takes the
XFS lock inside it.

Now, xfs could do a wrapper for ->splice_write() like it used to, and
have that wrapper take the xfs lock around the call to
iter_file_splice_write().  That was my first obvious patch.  I threw it
out because that's garbage too: then we end up doing ->write_iter(),
which takes the xfs_rw_ilock() again, and would immediately deadlock
*there* instead.

So the problem really is that the vfs layer simply does not allow the
filesystem to do any locking around the generic page cache helper
functions.  And XFS really wants to do that.

                  Linus
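
To make the write-side deadlock concrete, here is a minimal sketch of the
kind of ->splice_write() wrapper described above.  It is a hypothetical
reconstruction, not code from the thread; it reuses the xfs_rw_ilock()/
xfs_rw_iunlock() helpers quoted earlier, and the comments mark where the
second IOLOCK acquisition would hang.

        /* Hypothetical wrapper, for illustration only. */
        STATIC ssize_t
        xfs_file_splice_write(
                struct pipe_inode_info  *pipe,
                struct file             *out,
                loff_t                  *ppos,
                size_t                  len,
                unsigned int            flags)
        {
                struct xfs_inode        *ip = XFS_I(file_inode(out));
                ssize_t                 ret;

                /* (1) take the inode I/O lock, as the read side does */
                xfs_rw_ilock(ip, XFS_IOLOCK_EXCL);

                /*
                 * iter_file_splice_write() takes the pipe lock and then
                 * calls ->write_iter() on 'out'.  XFS's ->write_iter()
                 * takes XFS_IOLOCK_EXCL again, which we already hold at
                 * (1), so the task deadlocks on itself right there.
                 * Dropping the lock before the call just reintroduces
                 * the pipe-lock vs. IOLOCK ordering inversion instead.
                 */
                ret = iter_file_splice_write(pipe, out, ppos, len, flags);

                xfs_rw_iunlock(ip, XFS_IOLOCK_EXCL);
                return ret;
        }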