Re: Writing to /dev/null with fio

On Tue, Feb 02 2010, Jens Axboe wrote:
> On Tue, Feb 02 2010, Bart Van Assche wrote:
> > On Tue, Feb 2, 2010 at 9:19 AM, Jens Axboe <jens.axboe@xxxxxxxxxx> wrote:
> > > On Tue, Feb 02 2010, Bart Van Assche wrote:
> > >> The reason I started running such silly tests is because I noticed
> > >> that tests with dd and a small block size complete in a shorter time
> > >> than tests with fio for a fast storage device (e.g. remote RAM disk
> > >> accessed via SRP or iSER). Do the two tests below trigger similar
> > >> system calls? The ratio of fio time / dd time is about 1.50 for block
> > >> size 512 and about 1.15 for block size 4096.
> > >
> > > Fio definitely has more overhead than a simple read() to buf, write buf
> > > to /dev/null. If you switch off the stat calculations, it'll drop
> > > somewhat (use --gtod_reduce=1). But even then it's going to be slower
> > > than dd. Fio is modular and supports different IO engines etc, so the IO
> > > path is going to be a lot longer than with dd. The flexibility of fio
> > > does come at a cost. If you time(1) fio and dd, you'll most likely see a
> > > lot more usr time in fio.
> > >
> > > That said, it is probably time to do some profiling and make sure that
> > > fio is as fast as it can be.
> > 
> > That would definitely be appreciated. I would like to switch from dd
> > to fio for storage system benchmarking, something I can't do yet
> > because of the different results reported by the two tools.
> 
> So the first thing I noticed is that you get an lseek() because fio
> doesn't track the sequential nature of that job. How close do you get
> for bs=512 with using --gtod_reduce=1 and commenting out the lseek() in
> engines/sync.c:fio_syncio_prep()?
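
The idea is basically just to remember where the file position already is
and only lseek() when the next offset doesn't match. Roughly like this (a
minimal sketch, not fio's actual code, with the per-file bookkeeping
collapsed into a single static):

/* Minimal sketch of skipping lseek() for sequential I/O; not fio's actual code. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static off_t file_pos = -1;	/* where the fd's file position currently is */

/* Only seek when the requested offset isn't where we already are. */
static int prep(int fd, off_t offset)
{
	if (offset == file_pos)
		return 0;
	if (lseek(fd, offset, SEEK_SET) == (off_t) -1)
		return -1;
	file_pos = offset;
	return 0;
}

static ssize_t seq_write(int fd, const void *buf, size_t len, off_t offset)
{
	ssize_t ret;

	if (prep(fd, offset) < 0)
		return -1;
	ret = write(fd, buf, len);
	if (ret > 0)
		file_pos = offset + ret;
	return ret;
}

int main(void)
{
	char buf[4096];
	off_t off = 0;
	ssize_t ret;
	int fd, i;

	memset(buf, 0, sizeof(buf));
	fd = open("/dev/null", O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Sequential offsets: prep() only seeks on the very first write. */
	for (i = 0; i < 4; i++) {
		ret = seq_write(fd, buf, sizeof(buf), off);
		if (ret < 0) {
			perror("write");
			break;
		}
		off += ret;
	}
	close(fd);
	return 0;
}

With sequential offsets the seek happens once; after that every request goes
straight to write().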

I committed a fix for this; fio now tracks the sequential offset and skips
the lseek() automatically if you use the latest -git. A quick test here on
my desktop machine:

fio --bs=4k --size=64G --buffered=1 --rw=write --verify=0 --name=/dev/null --gtod_reduce=1 --disk_util=0

10378MB/s

fio --bs=512 --size=8G --buffered=1 --rw=write --verify=0 --name=/dev/null --gtod_reduce=1 --disk_util=0

1306MB/s

dd if=/dev/zero of=/dev/null bs=4k count=16M
68719476736 bytes (69 GB) copied, 9,71235 s, 7,1 GB/s

dd if=/dev/zero of=/dev/null bs=512 count=8M
4294967296 bytes (4,3 GB) copied, 3,3574 s, 1,3 GB/s

So it's about the same for 512-byte buffers, while fio is much quicker for
4k (must be due to proper alignment). I'll boot the big box and see what
that says.
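
As for the time(1) comparison mentioned earlier in the thread, running both
tools with the same arguments under time should make the extra usr time on
the fio side visible, e.g.:

time fio --bs=512 --size=8G --buffered=1 --rw=write --verify=0 --name=/dev/null --gtod_reduce=1 --disk_util=0
time dd if=/dev/zero of=/dev/null bs=512 count=8M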

-- 
Jens Axboe
