On Fri, 2009-08-21 at 10:20 -0400, Christoph Hellwig wrote:
> On Fri, Aug 21, 2009 at 01:40:10PM +0200, Jens Axboe wrote:
> > I've talked to Chris about this in the past too, but I never got around
> > to benchmarking FUA for O_DIRECT. It should be pretty easy to wire up
> > without making too many changes, and we do have FUA support on most SATA
> > drives too. Basically just a check in the driver for whether the
> > request is O_DIRECT and a WRITE, a la:
> >
> >         if (rq_data_dir(rq) == WRITE && rq_is_sync(rq))
> >                 WRITE_FUA;
> >
> > I know that FUA is used by that other OS, so I think we should be golden
> > on the hw support side.
>
> Just doing FUA should be pretty easy; in fact, from my reading of the
> code we already use FUA for barriers if supported: that is, only drain
> the queue, do a pre-flush for the barrier, and then issue the actual
> barrier write as FUA.

I've never really understood why FUA is considered equivalent to a
barrier. Our barrier semantics are that all I/Os issued before the
barrier must be safely on disk by the time the barrier completes. The
FUA semantics are that *this write* must be safely on disk by the time
it completes ... it can still leave preceding writes sitting in the
drive's cache.

I can see that if you're only interested in metadata, making every
metadata write FUA and leaving the cache to sort out data writes does
give a consistent filesystem image. But how does FUA give us Linux
barrier semantics?
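To make the distinction concrete, here is a toy user-space model (plain
C; emphatically not kernel code, and every name in it is invented for
the illustration) of a disk with a volatile write cache. A FUA write
makes only itself durable, while the barrier sequence Christoph
describes (drain, pre-flush, then the barrier write as FUA) also forces
everything issued before it onto the media:

#include <stdio.h>

#define NBLOCKS 16

static int cache[NBLOCKS];  /* blocks sitting in the volatile write cache */
static int ncached;
static int media[NBLOCKS];  /* media[b] == 1 => block b safely on the platter */

/* ordinary write: lands in the volatile cache, not yet durable */
static void disk_write(int block)
{
        cache[ncached++] = block;
}

/* SYNCHRONIZE CACHE: push every cached write out to the media */
static void disk_flush(void)
{
        while (ncached > 0)
                media[cache[--ncached]] = 1;
}

/* FUA write: this one write goes straight to the media, cache untouched */
static void disk_write_fua(int block)
{
        media[block] = 1;
}

int main(void)
{
        /* Case 1: FUA alone.  Blocks 3 and 7 stay stuck in the cache. */
        disk_write(3);
        disk_write(7);
        disk_write_fua(9);
        printf("FUA only:  block 3 durable=%d, block 9 durable=%d\n",
               media[3], media[9]);

        /* Case 2: barrier emulation, pre-flush then the FUA write. */
        disk_flush();           /* pre-flush empties the cache ... */
        disk_write_fua(10);     /* ... then the barrier write itself */
        printf("flush+FUA: block 3 durable=%d, block 10 durable=%d\n",
               media[3], media[10]);
        return 0;
}

The model leaves out the queue drain, since there is no reordering
queue here; on real hardware the drain is what guarantees the pre-flush
actually sees every earlier write before the FUA write is issued.

James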