Fwd: FIO 3.11

-------- Forwarded Message --------
Subject: 	FIO 3.11
Date: 	Wed, 17 Oct 2018 18:45:03 +0000
From: 	Etienne-Hugues Fortin <efortin@xxxxxxxxxxxxxxxx>
To: 	axboe@xxxxxxxxx <axboe@xxxxxxxxx>



Hi,

 

Sorry to bother you with these questions, but I haven't been able to find the answers on my own. I've been experimenting with fio for a little over a month, and I'm still unsure about the following points.

 

First, when I create a profile, my assumption is that if I have two jobs and do not use 'stonewall', the jobs run concurrently with equal priority. So, if I want a workload that is 50% read / 50% write, with 25% random and 75% sequential I/O, I would need a job file that looks like this:

 

[128k_random]
rw=randread

[128k_seq_1]
numjobs=3
rw=read

[128k_random_w]
rw=randwrite

[128k_seq_1_w]
numjobs=3
rw=write

 

Here, exactly 6 of the 8 jobs (75%) are sequential, and half of the jobs are reads. I've run this job file against a standard NFS server with SSDs as the backend. With 4 clients running fio simultaneously, I get 4 results: 65/113, 68/108, 67/115, and 66/111 MiB, which works out to about 37% read and 63% write. What would explain getting more writes than reads with this scenario? I also ran the job against an all-flash array doing NAS and got similar results, so I suspect it has to do with how quickly the storage answers: on typical storage units, writes are cached in memory, so they are acknowledged faster. My hypothesis is that as soon as fio gets an answer, it immediately issues a new I/O, which would skew the results the way I'm seeing. Is that what is happening? If so, other than tuning numjobs until I get roughly 50% read/write, is there a better way of doing this?
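One workaround I have been considering, assuming the imbalance really does come from writes being acknowledged faster: cap every job's throughput with rate= so the faster side cannot run ahead of the slower one. The 30m cap below is just an illustrative figure I picked, not a measured value:

```
# Hypothetical sketch: give every job the same bandwidth ceiling so
# the write jobs cannot outrun the read jobs. 30m is an arbitrary cap
# chosen below what the reads achieved in my runs.
[global]
bs=128k
rate=30m

[128k_random]
rw=randread
```

The same cap would then be repeated implicitly for the other seven jobs via the [global] section.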

 

Related to the same simulation, I saw that rw= and randrw can combine reads and writes in a sequential or random workload, and that the mix can then be set with rwmixread and/or rwmixwrite. What I haven't been able to find out is whether the reads and writes are issued against each file. With a single job, they have to be. But if I set numjobs=10, do all 10 files receive both reads and writes, or can 40% of the files receive only writes and 60% of them only reads? My current thinking is that the mixed workload is executed against each file. If that's the case, it's not what I want to simulate. Is there another way of doing this?
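If the mix really is applied per file, one way I might force a 40/60 split across distinct files is to drop rwmixread and use separate read and write jobs, each with its own set of files. The job names, file names, and use of filename_format below are just my assumption of how this would be expressed:

```
# Hypothetical sketch: 4 of 10 files receive only writes and 6 receive
# only reads, instead of every file seeing a 40/60 mix.
[writers]
rw=randwrite
numjobs=4
filename_format=wfile.$jobnum

[readers]
rw=randread
numjobs=6
filename_format=rfile.$jobnum
```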

 

I also did some testing on raw devices (block storage) and tried the latency target profile. I can see that if I give it the latency target I need (0.75 ms) at the 75th percentile, it settles at roughly the IOPS level the unit can sustain while staying under the target 75% of the time. However, I'm not seeing the iodepth gradually increase as the test progresses. As a result, building the final job file is still not as simple as looking at the result and taking the iodepth that makes sense. Did I miss something in the results or in the latency profile job? My job file is the following:

 

[global]
ioengine=libaio
invalidate=1
iodepth=32
direct=1
latency_target=750
latency_window=30s
latency_percentile=75
per_job_logs=0
group_reporting
bs=32k
rw=randrw
rwmixread=60
write_bw_log=blocks
write_lat_log=blocks
write_iops_log=blocks

[sdb-32k]
filename=/dev/sdb

[sdc-32k]
filename=/dev/sdc

[sdd-32k]
filename=/dev/sdd

[sde-32k]
filename=/dev/sde

 

The last thing is about the bw/iops/lat logs. I've used them in the latency test and in some other job files, and the filename format I get is always filename_[bw|clat|iops|lat|slat].log.server_name

 

With this format, fio_generate_plots and fio2gnuplot can't find any files; those programs seem to expect files matching *_bw.log, with no server name at the end. Is there a way to get the server name at the beginning of the filename? The documentation seems to indicate that the logs should end in .log, but that's not what I'm getting so far.
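In the meantime I've been renaming the logs by hand so the plotting tools can find them. A small shell loop like this (the log and host names are just examples from my setup) moves the server name to the front and restores the .log suffix:

```shell
# Rename client/server logs such as blocks_bw.log.node1 to
# node1_blocks_bw.log so tools looking for *_bw.log can match them.
for f in *.log.*; do
    base=${f%.*}        # strip the trailing ".server_name" -> blocks_bw.log
    host=${f##*.}       # keep the server name              -> node1
    mv -- "$f" "${host}_${base}"
done
```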

 

Thank you for your assistance and keep up the good work on such a great tool.

 

Etienne Fortin



