On Mon, Nov 26, 2012 at 08:32:54PM -0500, Theodore Ts'o wrote:
> On Mon, Nov 26, 2012 at 05:09:08PM -0500, Christoph Hellwig wrote:
> > On Mon, Nov 26, 2012 at 04:49:37PM -0500, Theodore Ts'o wrote:
> > > Christoph, can you give some kind of estimate for the overhead that
> > > adding this locking in XFS actually costs in practice?
> >
> > I don't know of any real-life measurements, but in terms of
> > implementation the overhead is:
> >
> >  a) taking the rw_semaphore in shared mode for every buffered read
> >  b) taking the slightly slower exclusive rw_semaphore for buffered
> >     writes instead of the plain mutex
> >
> > On the other hand it significantly simplifies the locking for direct
> > I/O and allows parallel direct I/O writers.
>
> I should probably just look at the XFS code, but.... if you're taking
> an exclusive lock for buffered writes, won't this impact the
> performance of buffered writes happening in parallel on different
> CPUs?

Indeed it does - see my previous email. But it's no worse than
generic_file_aio_write(), which takes i_mutex across buffered writes,
and that is what most filesystems currently do.

FWIW, we also take the i_mutex outside the i_iolock for the buffered
write case, because generic_file_buffered_write() is documented to
require it to be held. See xfs_rw_ilock() and friends for the locking
order semantics...

This buffered write exclusion is why we have been considering
replacing the rwsem with a shared/exclusive range lock - so that we
can do concurrent non-overlapping reads and writes (for both direct
IO and buffered IO) without compromising the POSIX atomic write
guarantee (i.e. that a read will see either the entire write or none
of it). Range locking will allow us to do that for both buffered and
direct IO...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
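
A minimal userspace sketch of the locking order discussed above, using
POSIX pthread primitives in place of the kernel's rwsem and mutex. The
names here (fake_inode, io_lock, inode_mutex, buffered_*_lock) are
hypothetical stand-ins for XFS's i_iolock and i_mutex, not real kernel
or XFS APIs:

#include <pthread.h>

/* Hypothetical stand-in for the relevant parts of an inode. */
struct fake_inode {
	pthread_mutex_t  inode_mutex;	/* plays the role of i_mutex */
	pthread_rwlock_t io_lock;	/* plays the role of i_iolock (rwsem) */
};

static struct fake_inode example_inode = {
	.inode_mutex = PTHREAD_MUTEX_INITIALIZER,
	.io_lock     = PTHREAD_RWLOCK_INITIALIZER,
};

/* Buffered read: rwlock in shared mode, so readers run concurrently. */
static void buffered_read_lock(struct fake_inode *ip)
{
	pthread_rwlock_rdlock(&ip->io_lock);
}

static void buffered_read_unlock(struct fake_inode *ip)
{
	pthread_rwlock_unlock(&ip->io_lock);
}

/*
 * Buffered write: mutex first, then the rwlock in exclusive mode,
 * mirroring the "i_mutex outside i_iolock" ordering described above.
 * This serialises buffered writers against each other and against
 * buffered readers.
 */
static void buffered_write_lock(struct fake_inode *ip)
{
	pthread_mutex_lock(&ip->inode_mutex);
	pthread_rwlock_wrlock(&ip->io_lock);
}

static void buffered_write_unlock(struct fake_inode *ip)
{
	pthread_rwlock_unlock(&ip->io_lock);
	pthread_mutex_unlock(&ip->inode_mutex);
}

In this model, a direct I/O writer that only needs to exclude buffered
I/O could take io_lock in shared mode rather than exclusive, which is
roughly how the parallel direct I/O writers Christoph mentions become
possible.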