On Wed, Dec 2, 2015 at 8:24 PM, Dallas Clement <dallas.a.clement@xxxxxxxxx> wrote:
> On Wed, Dec 2, 2015 at 8:18 PM, Phil Turmel <philip@xxxxxxxxxx> wrote:
>> On 12/02/2015 07:12 PM, Dallas Clement wrote:
>>> All measurements were computed from bandwidth averages taken on a
>>> 12-disk array with an XFS filesystem, using fio with direct=1,
>>> sync=1, invalidate=1.
>>
>> Why do you need direct=1 and sync=1? Have you checked an strace of
>> the app you are trying to model that shows it actually uses these?
>>
>>> Seems incredible!?
>>
>> Not with those options. Particularly sync=1. That causes an inode
>> stats update and a hardware queue flush after every write operation.
>> Support for that on various devices has changed over time.
>>
>> I suspect that if you bisect the kernel to pinpoint the change(s)
>> responsible, you'll find a patch that closes a device-specific or
>> filesystem sync bug, or one that enables deep queues for a device.
>>
>> Modern software that needs file integrity guarantees makes sparing
>> use of fdatasync and/or fsync and avoids sync entirely. You'll have
>> a more believable test if you use fsync_on_close=1 or end_fsync=1.
>>
>> Phil
>
> Hi Phil. Hmm, it makes sense that something may have changed wrt
> syncing. Basically, what I am trying to do with my fio testing is
> avoid any asynchronous or caching behavior.

I'm not sure that sync=1 has any effect in this case, where I've got
direct=1 set (for non-buffered I/O). I think the sync=1 flag only
matters for buffered I/O. I really shouldn't be setting that flag at
all.
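For my next run I'm thinking of a minimal job file along the lines
Phil suggests, dropping sync=1 and letting fio issue a single fsync
instead. A sketch is below; the job name, mount point, ioengine,
block size and file size are placeholders for illustration, not my
real test parameters:

  # sketch.fio: placeholder job, not the actual test config
  [global]
  ioengine=libaio
  # keep O_DIRECT to bypass the page cache
  direct=1
  # drop any cached pages for the file before the run starts
  invalidate=1
  # block size and file size made up for illustration
  bs=1M
  size=4G
  # placeholder mount point for the XFS filesystem under test
  directory=/mnt/test

  [seqwrite]
  rw=write
  # one fsync when the job finishes, instead of O_SYNC on every
  # write; fsync_on_close=1 would instead sync each file as it
  # is closed
  end_fsync=1

That should still avoid the page cache via O_DIRECT while only paying
the flush cost once at the end, rather than after every write.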