Disk slows down while emptying write buffer

I am trying to figure out why end_fsync causes the overall disk write
to take longer.  I'm aware that without this option, the test may
return before all data is flushed to disk, but I am monitoring actual
disk writes with iostat.  At the beginning of the test, iostat shows
165MBps writing to each disk, and fio shows over 3000MBps (because
it's writing to the buffer, and the buffer is simultaneously going to
disk).
Near the end, fio drops to 0MBps, which means it's waiting for fsync to
finish.  At that point, iostat drops to 120MBps per disk.
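The same two phases can be reproduced outside of fio. This is just a minimal Python sketch (not fio, and the timings will depend entirely on your hardware and dirty-page settings): buffered writes return as soon as the data lands in the page cache, and fsync() then blocks until the kernel has drained the dirty pages to the device, which is the phase where the slowdown appears.

```python
import os
import tempfile
import time


def timed_write_and_fsync(path, total_mb=64, bs=1 << 20):
    """Time the buffered-write phase and the fsync (drain) phase separately."""
    buf = b"\0" * bs
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        t0 = time.monotonic()
        for _ in range(total_mb):
            os.write(fd, buf)  # returns once data is in the page cache
        t_write = time.monotonic() - t0

        t0 = time.monotonic()
        os.fsync(fd)  # blocks until dirty pages have reached the disk
        t_fsync = time.monotonic() - t0
    finally:
        os.close(fd)
    return t_write, t_fsync


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        w, s = timed_write_and_fsync(os.path.join(d, "w.bin"))
        print(f"write phase: {w:.3f}s, fsync phase: {s:.3f}s")
```

With a size that exceeds the amount of dirty data the kernel will buffer, the write phase reports a much higher apparent throughput than the device can sustain, and fsync absorbs the difference, much like fio's end_fsync=1 drain.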

fio --name=w --rw=write --bs=1M --size=30G --directory=/mnt/stripe --end_fsync=1

If I instead run this command with end_fsync=0 then iostat shows
steady 165MBps the whole time.

I have observed this issue on CentOS with an mdadm software RAID
formatted with XFS, and also on FreeBSD with a ZFS pool.  If I instead
run it on a single disk formatted with XFS, it does not use the cache at
all; it just writes steadily at 165MBps.  I'm not sure why that is.

I'd appreciate any insights into this.
-Elliott


