Re: Don't benchmark with fio

On Sun, 26 Apr 2020 at 16:30, Seena Fallah <seenafallah@xxxxxxxxx> wrote:
>
> On Sun, Apr 26, 2020 at 7:33 PM Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
> >
> > On Sun, 26 Apr 2020 at 14:17, Seena Fallah <seenafallah@xxxxxxxxx> wrote:
> > >
> > > Thanks all for your replies.
> > >
> > > Maybe, if you like, it would be a good feature for fio to support
> > > this type of need :)
> > >
> >
> > cd /to/filesystem; fio --size=100k --bs=4k --rw=write --name=notabenchmark
> >
> > ? Obviously this could go only to the cache, so maybe you want
> > end_fsync=1 (https://fio.readthedocs.io/en/latest/fio_man.html#cmdoption-arg-end-fsync
> > ) etc. If you really want to do only one I/O, I suppose you could
> > change the block size to 100k...
>
> I have tried it, but it still behaves like a benchmark. I just want
> to read/write, for example, 4 I/Os and see how long they take.

You could try using something like thinktime=1s thinktime_blocks=4
time_based=1 runtime=1m, but you still wouldn't see the live latency
of only the just-completed events (see below)...
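
For example, a rough sketch (untested, and the filename is just a
placeholder - tune size/bs to whatever each probe should write):

fio --name=probe --filename=/to/filesystem/probe.dat --size=100k \
    --bs=4k --rw=write --thinktime=1s --thinktime_blocks=4 \
    --time_based=1 --runtime=1m --end_fsync=1

would complete four 4k writes, pause for a second, and repeat for a
minute, but the latency would still only be summarised at the end.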

> > > I have one more question, about xfs_io; I don't know if this is the
> > > right place, and if it's not I'm sorry. I ran xfs_io and got this result:
> > > 100.000000 bytes, 1 ops; 0.0000 sec (1.514 MiB/sec and 15873.0159 ops/sec)
> > > What is the 15873.0159 ops/sec? Did xfs_io really do 15873.0159 IOPS or 1 IOPS?
> >
> > What if it did only 1 op but timed how long it took to do all the I/O
> > (e.g. around 63 microseconds)? When you average that out, 1 op in
> > 0.000063 sec extrapolates to 1 / 0.000063 ≈ 15873.0159 ops/sec...
>
> The main goal is that I am writing a prober for my file system, which
> is attached to a VM for example, and I don't want to put load on my
> file system. I just want to probe it and see whether it writes 4 I/Os
> in the same time as it did last time.

Ah OK, I see what you mean: "I want to know the live and ongoing
/latency/ of writing 100k" (à la ping for networks, as opposed to
netperf). The other mail mentioning ioping sounds like what you need,
and for what you are describing I'd focus on the time it took for each
I/O to complete. I'd argue you will never know the bandwidth (i.e. the
maximum I/O you could send at a given instant) because in your scenario
you simply aren't sending enough data to reach it (unless your disk is
incredibly weak) - extrapolation from a couple of tiny requests won't
reflect the true saturation point.
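
Something like the following might do it (an untested sketch - the
flags are from memory so check ioping's man page, and the path is a
placeholder):

ioping -c 4 -s 100k -W /to/filesystem

which should issue four 100k write requests, one per second, printing
each request's latency as it completes, ping-style.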

> > You can see the code here:
> > https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git/tree/io/pwrite.c#n468
>
> I see the code, but it doesn't match the output pattern I sent you. Am I wrong?

Did you read through the report_io_times() function too?
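
It boils down to dividing the totals by the elapsed wall-clock time. As
a back-of-envelope check (assuming the ~63 microsecond figure above,
not the exact xfsprogs arithmetic):

awk 'BEGIN { printf "%.3f MiB/sec and %.4f ops/sec\n",
             (100 / 0.000063) / (1024 * 1024), 1 / 0.000063 }'

prints "1.514 MiB/sec and 15873.0159 ops/sec" - the same numbers as
your single 100-byte op.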

-- 
Sitsofe | http://sucs.org/~sits/



