On Tue, Feb 02 2010, Bart Van Assche wrote:
> On Tue, Feb 2, 2010 at 9:19 AM, Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
> > On Tue, Feb 02 2010, Bart Van Assche wrote:
> >> The reason I started running such silly tests is that I noticed
> >> that tests with dd and a small block size complete in a shorter time
> >> than tests with fio for a fast storage device (e.g. a remote RAM disk
> >> accessed via SRP or iSER). Do the two tests below trigger similar
> >> system calls? The ratio of fio time / dd time is about 1.50 for block
> >> size 512 and about 1.15 for block size 4096.
> >
> > Fio definitely has more overhead than a simple read() to buf, write buf
> > to /dev/null. If you switch off the stat calculations, it'll drop
> > somewhat (use --gtod_reduce=1). But even then it's going to be slower
> > than dd. Fio is modular and supports different IO engines etc, so the
> > IO path is going to be a lot longer than with dd. The flexibility of
> > fio does come at a cost. If you time(1) fio and dd, you'll most likely
> > see a lot more usr time in fio.
> >
> > That said, it is probably time to do some profiling and make sure that
> > fio is as fast as it can be.
>
> That would definitely be appreciated. I would like to switch from dd
> to fio for storage system benchmarking, something I can't do yet
> because of the different results reported by the two tools.

So the first thing I noticed is that you get an lseek() because fio
doesn't track the sequential nature of that job. How close do you get
for bs=512 when using --gtod_reduce=1 and commenting out the lseek() in
engines/sync.c:fio_syncio_prep()? Alternatively, using --ioengine=psync
would remove that overhead as well (sketched below).

But realize that fio will never completely match dd for plain sync and
sequential IO; it's just not possible.

--
Jens Axboe
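
To make the syscall point above concrete, here is a minimal sketch of
the two per-IO patterns being compared. This is illustrative code
written for this note, not fio's actual engines/sync.c source; the fd,
buf, len and offset parameters are assumptions:

    /* Illustrative only -- not fio's engine code. The default sync
     * engine positions each IO with lseek() before read(), i.e. two
     * syscalls per IO when fio doesn't know the job is sequential.
     * The psync engine uses pread(), which takes the offset directly,
     * so a single syscall per IO suffices.
     */
    #include <unistd.h>
    #include <sys/types.h>

    /* sync-style IO: seek, then read -- two syscalls per IO */
    ssize_t sync_style_read(int fd, void *buf, size_t len, off_t offset)
    {
            if (lseek(fd, offset, SEEK_SET) == (off_t) -1)
                    return -1;
            return read(fd, buf, len);
    }

    /* psync-style IO: positioned read -- one syscall per IO */
    ssize_t psync_style_read(int fd, void *buf, size_t len, off_t offset)
    {
            return pread(fd, buf, len, offset);
    }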
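
For the timing comparison itself, the invocations below show one way to
line the two tools up under time(1) with the flags mentioned above. The
device path /dev/sdX and the job name are placeholders for illustration,
not the commands from Bart's original tests:

    # /dev/sdX and the job parameters are illustrative placeholders
    $ time dd if=/dev/sdX of=/dev/null bs=512
    $ time fio --name=seqread --filename=/dev/sdX --rw=read --bs=512 \
          --ioengine=psync --gtod_reduce=1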