On Mon, Nov 26, 2012 at 04:49:37PM -0500, Theodore Ts'o wrote:
> On Mon, Nov 26, 2012 at 03:13:08PM -0500, Christoph Hellwig wrote:
> > On Mon, Nov 26, 2012 at 12:05:57PM -0800, Hugh Dickins wrote:
> > > Gosh, that's a very sudden new consensus. The consensus over the past
> > > ten or twenty years has been that the Linux kernel enforce locking for
> > > consistent atomic writes, but skip that overhead on reads - hasn't it?
> >
> > I'm not sure there was much of a consensus ever. We XFS people always
> > tried to push everyone toward the strict rule, but there was enough
> > pushback that it didn't actually happen.
>
> Christoph, can you give some kind of estimate for the overhead that
> adding this locking in XFS actually costs in practice?

It doesn't show up as any significant numbers in profiles, if that is
what you are asking. I've tested random 4k reads and writes at over
2 million IOPS to a single file using concurrent direct IO, so the
non-exclusive locking overhead is pretty minimal.

If the workload is modified slightly to use buffered writes instead of
direct IO writes, thereby triggering shared/exclusive lock contention,
then the same workload tends to be limited to around 250,000 IOPS per
file. That's a direct result of the exclusive locking limiting the
workload to what a single CPU can sustain (i.e. the difference between
8p @ 250-300k IOPS and 1p @ 250k IOPS on the exclusive locking
workload).

> And does XFS
> provide any kind of consistency guarantees if the reads/write overlap
> spans multiple pages? I assume the answer to that is no, correct?

A buffered write is locked exclusive for the entirety of the write.
That includes multi-page writes, as the locking is outside the
begin_write/end_write per-page iteration. Hence the atomicity of the
entire buffered write against both buffered reads and direct IO is
guaranteed.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx