On Mon, May 2, 2016 at 8:41 AM, Boaz Harrosh <boaz@xxxxxxxxxxxxx> wrote:
> On 04/29/2016 12:16 AM, Vishal Verma wrote:
>> All IO in a dax filesystem used to go through dax_do_io, which cannot
>> handle media errors, and thus cannot provide a recovery path that can
>> send a write through the driver to clear errors.
>>
>> Add a new iocb flag for DAX, and set it only for DAX mounts. In the IO
>> path for DAX filesystems, use the same direct_IO path for both DAX and
>> direct_io iocbs, but use the flags to identify when we are in O_DIRECT
>> mode vs non-O_DIRECT with DAX, and for O_DIRECT, use the conventional
>> direct_IO path instead of DAX.
>>
>
> Really? What is your thinking here?
>
> What about all the current users of O_DIRECT? You have just made them
> 4 times slower and "less concurrent*" than "buffered io" users, since
> the direct_IO path will queue an IO request and all.
> (And if it is not so slow, then why do we need dax_do_io at all? [Rhetorical])
>
> I hate it that you overload the semantics of a known and expected
> O_DIRECT flag for special pmem quirks. This is an incompatible
> and unrelated overload of the semantics of O_DIRECT.

I think it is the opposite situation: it is undoing the premature
overloading of O_DIRECT that went in without performance numbers.

This implementation clarifies that dax_do_io() handles buffered I/O in
the absence of a page cache, while O_DIRECT behaves as it nominally
would by sending an I/O to the driver. It has the benefit of matching
the error semantics of a typical block device, where a buffered write
could hit an error filling the page cache, but an O_DIRECT write
potentially triggers the drive to remap the block.
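
To make the routing concrete, here is a minimal userspace C sketch of the
dispatch the patch description lays out. Everything in it (my_iocb,
MY_IOCB_DIRECT, MY_IOCB_DAX, submit_io, and the three path stubs) is a
hypothetical stand-in for illustration, not the kernel's actual iocb API;
it only models "O_DIRECT goes through the driver, non-O_DIRECT on a DAX
mount goes through the dax path".

/* Illustrative sketch only: models the flag-based routing described
 * above. The flag names and types are hypothetical, not kernel code. */
#include <stdio.h>

#define MY_IOCB_DIRECT (1 << 0)  /* file was opened with O_DIRECT     */
#define MY_IOCB_DAX    (1 << 1)  /* filesystem is mounted with DAX    */

struct my_iocb {
	unsigned int flags;
};

static void buffered_io(void)       { printf("page-cache buffered path\n"); }
static void dax_io(void)            { printf("dax path (no page cache)\n"); }
static void blockdev_direct_io(void){ printf("conventional direct_IO path via the driver\n"); }

/* Route an I/O the way the patch description lays out:
 * - O_DIRECT always takes the conventional direct_IO path, so a media
 *   error reaches the driver, which can clear/remap the bad block.
 * - non-O_DIRECT on a DAX mount takes the dax path, which stands in
 *   for buffered I/O since there is no page cache to fill.
 * - everything else is ordinary buffered I/O. */
static void submit_io(const struct my_iocb *iocb)
{
	if (iocb->flags & MY_IOCB_DIRECT)
		blockdev_direct_io();
	else if (iocb->flags & MY_IOCB_DAX)
		dax_io();
	else
		buffered_io();
}

int main(void)
{
	struct my_iocb odirect_on_dax  = { .flags = MY_IOCB_DIRECT | MY_IOCB_DAX };
	struct my_iocb buffered_on_dax = { .flags = MY_IOCB_DAX };
	struct my_iocb plain_buffered  = { .flags = 0 };

	submit_io(&odirect_on_dax);   /* conventional direct_IO path */
	submit_io(&buffered_on_dax);  /* dax path */
	submit_io(&plain_buffered);   /* buffered path */
	return 0;
}

Under this split a failure on the O_DIRECT branch surfaces from the
driver, which is what provides the recovery path for clearing errors
described above.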