Understanding iostat and vmstat output

I realize this question isn't specifically fio related, but I imagine
many people on this list can answer it accurately, and I think the
answer would be valuable for anyone doing disk I/O
benchmarking/analysis. I looked for vmstat and iostat mailing lists
and documentation and came up short. I apologize if this is just
entirely inappropriate for this list.

I have noticed that while doing continuous writes with dd (simply
streaming from a partition to a file), vmstat's bo (blocks out)
counter sits at zero for long stretches of time while iostat's w/s
(and related counters) show significant write traffic. vmstat's bo
occasionally jumps above zero, and I have definitely correlated some
of those bursts with hitting dirty_background_ratio.
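
For reference, the setup is roughly the following (the device name,
file path, and sizes are just placeholders, not my exact configuration):

    # terminal 1: buffered sequential write from a partition to a file
    dd if=/dev/sdb1 of=/data/ddtest bs=1M count=4096

    # terminal 2: one-second samples from both tools, side by side
    vmstat 1
    iostat -x 1

    # current writeback thresholds
    sysctl vm.dirty_background_ratio vm.dirty_ratio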

I have noticed the same behavior when using fio in a write mode with
direct=0. In contrast, with fio's direct=1, vmstat's bo always shows
non-zero values while iostat's w/s and friends are also non-zero.
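
In case it matters, the fio invocations are along these lines (job
names, file path, and sizes again just placeholders):

    # buffered writes: direct=0 (the default), data goes through the page cache
    fio --name=buffered --rw=write --bs=1M --size=1G --direct=0 --filename=/data/fiotest

    # O_DIRECT writes: direct=1 bypasses the page cache
    fio --name=direct --rw=write --bs=1M --size=1G --direct=1 --filename=/data/fiotest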

So, it seems like iostat reports all writes to disks/partitions even
when they are serviced by the page cache, while vmstat only counts
bi/bo when data is actually read from or written to a disk (e.g. when
dirty pages are flushed).
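
If it helps narrow things down, my (possibly wrong) understanding is
that iostat samples the per-device counters in /proc/diskstats, while
vmstat derives bi/bo from the pgpgin/pgpgout counters in /proc/vmstat;
watching those files directly during a run might show where the two
tools diverge:

    # per-device request/sector counters that iostat appears to sample
    cat /proc/diskstats

    # system-wide paging counters that vmstat's bi/bo seem to come from
    grep -E 'pgpgin|pgpgout' /proc/vmstat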

Is this accurate?

Thanks, and again, apologies if this is considered totally
inappropriate for this list.
Andy