RE: FIO windows

Just a little backstory: I am testing an 88-core Dell R930 with 30 NVMe drives: six Kingston PCIe NVMe cards, each carrying four M.2 drives (24 drives total), plus six 2.5" U.2 NVMe drives.

I started out with my usual benchmark tools and was not able to get anything above 36 GB/s, and only by running multiple instances of the tools at the same time.

Then I found Diskspd for Windows, which gives a lot of control over threads and affinity, and with it I was able to reach 52.7 GB/s, which is pretty close to the drives' theoretical max of about 56 GB/s.

I am trying to achieve the same results, or close to them, with fio so I can compare Windows with CentOS and ultimately our version of Red Hat. I am having trouble tuning fio much above 14 GB/s.
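
For context, what I have been trying looks roughly like the sketch below; the drive letters, block size, queue depth and CPU ranges are placeholders rather than my exact settings. The intent is to mirror what Diskspd was doing: one worker per drive with a deep queue and the threads pinned across the cores.

[global]
ioengine=windowsaio
direct=1
thread
rw=read
blocksize=1m
; iodepth is a placeholder; raise it until throughput stops improving
iodepth=32
group_reporting
time_based
runtime=60
size=10g

; one job section per drive; cpus_allowed pins that drive's threads (ranges are placeholders)
[drive-F]
filename=F\:\\testfile
cpus_allowed=0-3

[drive-G]
filename=G\:\\testfile
cpus_allowed=4-7

; ...and so on for the remaining drives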

-Dave

-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx] 
Sent: Tuesday, October 31, 2017 3:45 PM
To: David Hare <david.hare@xxxxxxxxxxxxxxx>
Cc: Jens Axboe <axboe@xxxxxxxxx>; fio@xxxxxxxxxxxxxxx
Subject: Re: FIO windows

Hmm, I can't reproduce the problem here, but it's still curious. Do you get the same problem with one file, and if so, after the job runs can you check what size the file was?

Is there anything special about the filesystems? Are they local NTFS and quite small (less than 16TBytes)? Do they have a custom cluster size?
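
A quick way to check the resulting size (F:\testfile below is just the first path from your job file):

dir F:\testfile
(or in PowerShell: (Get-Item F:\testfile).Length)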

On 31 October 2017 at 22:33, David Hare <david.hare@xxxxxxxxxxxxxxx> wrote:
> You may be on to something!
>
> I tried 3 drives, got the exact same results. See attached.
>
>
>
> [global]
>
> ioengine=windowsaio
> blocksize=64k
> direct=1
>
>
> thread
> size=250m
>
>
>
> time_based
> runtime=10
>
>
> [asdf]
> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile
>
> ;:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:N\:\\testfile
> ;:O\:\\testfile:P\:\\testfile:Q\:\\testfile
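>
> (For what it's worth, size=250m split across the 3 active files works out
> to 250*1024**2/3 ~ 87381333.3 bytes per file, so again not a whole
> multiple of a 64k block.)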
>
>
> -Dave
>
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx]
> Sent: Tuesday, October 31, 2017 3:15 PM
> To: David Hare <david.hare@xxxxxxxxxxxxxxx>
> Cc: Jens Axboe <axboe@xxxxxxxxx>; fio@xxxxxxxxxxxxxxx
> Subject: Re: FIO windows
>
> One idea is that you are seeing the effect of trying to do I/O to a
> file whose size is not a multiple of the blocksize. In theory, if you have
> size=1g and 9 files, then each file ends up being 1024**3/9.0 ~
> 119304647.1111111 bytes big (see
> http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-filename
> for where this is described). Could it be that Windows goes on to
> make a file that is smaller than what we were asking for?
>
> If this theory were right you might see a similar problem if you were 
> only using 3 files.
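>
> (As a sanity check: 119304647.1111111 / 65536 ~ 1820.5, so the last 64k
> block of each file would only be partial.)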
>
> On 31 October 2017 at 22:06, David Hare <david.hare@xxxxxxxxxxxxxxx> wrote:
>> Yes, I made a typo when I changed it back, sorry.
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx]
>> Sent: Tuesday, October 31, 2017 3:05 PM
>> To: David Hare <david.hare@xxxxxxxxxxxxxxx>
>> Cc: Jens Axboe <axboe@xxxxxxxxx>; fio@xxxxxxxxxxxxxxx
>> Subject: Re: FIO windows
>>
>> Yes, that's right. Also, previously did you mean you had set size=512m
>> even though you wrote size=512g?
>>
>> On 31 October 2017 at 22:03, David Hare <david.hare@xxxxxxxxxxxxxxx>
>> wrote:
>>> I assume you want me to change the size parameter with a 64k
>>> blocksize, since everything is working with a 16k blocksize?
>>>
>>> -----Original Message-----
>>> From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx]
>>> Sent: Tuesday, October 31, 2017 2:54 PM
>>> To: David Hare <david.hare@xxxxxxxxxxxxxxx>
>>> Cc: Jens Axboe <axboe@xxxxxxxxx>; fio@xxxxxxxxxxxxxxx
>>> Subject: Re: FIO windows
>>>
>>> Hi,
>>>
>>> Can you add unlink=1 and keep reducing the size parameter (e.g. down
>>> to 128m, then 16m, 4m, 1m, 512k, etc.)?
>>>
>>> Can you attach the full output that's produced when it fails with
>>> this reduced job?
>>>
>>> If you can make the problem happen with very little I/O being done
>>> (i.e. the job bombs out after doing less than 1 MiByte's worth of I/O),
>>> you can try adding --debug=all to the job and seeing if that offers
>>> any clues as to the last thing it was doing.
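>>>
>>> For example, something along these lines with the reduced job file,
>>> and then attach the console output:
>>>
>>> fio --debug=all fio2.fio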
>>>
>>> On 31 October 2017 at 21:46, David Hare <david.hare@xxxxxxxxxxxxxxx>
>>> wrote:
>>>> It was OK with or without the colon. The size didn't seem to make a
>>>> difference, but blocksize did; see the commented block sizes below.
>>>>
>>>> fio2.fio
>>>> [global]
>>>>
>>>> ioengine=windowsaio
>>>>
>>>> ;blocksize=64k - error
>>>> ;blocksize=32k - error
>>>> ;blocksize=16k - no error
>>>>
>>>> blocksize=16k
>>>>
>>>> direct=1
>>>>
>>>> thread
>>>>
>>>> size=512g
>>>>
>>>>
>>>>
>>>> time_based
>>>> runtime=10
>>>>
>>>> [asdf]
>>>> filename=F\:\\testfile:G\:\\testfile:H\:\\testfile:I\:\\testfile:J\:\\testfile:K\:\\testfile:L\:\\testfile:M\:\\testfile:P\:\\testfile
>>>>
>>>> Results:
>>>> Run status group 0 (all jobs):
>>>> READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), 
>>>> io=1413MiB (1481MB), run=10001-10001msec
>>>>
>>>>
>>>> -Dave
>>
>> --
>> Sitsofe | http://sucs.org/~sits/
>>
>>
>
>
>
> --
> Sitsofe | http://sucs.org/~sits/
>
>



--
Sitsofe | http://sucs.org/~sits/
