Re: precondition ssd drives w/ fio

On Thu, Oct 3, 2013 at 2:39 PM, Jeffrey Mcvay (jmcvay)
<jmcvay@xxxxxxxxxx> wrote:
>
>> -----Original Message-----
>> From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On
>> Behalf Of Brian L.
>> Sent: Thursday, October 03, 2013 2:09 PM
>> To: Juergen Salk; fio@xxxxxxxxxxxxxxx
>> Subject: Re: precondition ssd drives w/ fio
>>
>> Thank you all for the response.  I found ssd-steadystate.fio in the
>> tarball.
>>
>> Brian
>>
>> Brian L.
>>
>>
>> On Thu, Oct 3, 2013 at 1:42 PM, Juergen Salk <juergen.salk@xxxxxx>
>> wrote:
>> > * Brian L. <brianclam@xxxxxxxxx> [131003 12:31]:
>> >
>> >> For benchmarking SSD drives, I was told that we should precondition
>> >> our drives to get more accurate real world reading.
>> >>
>> >> I was wondering if anyone is using fio itself to precondition ssd
>> >> drives or use a different script to populate random data on the
>> >> drives?
>> >
>> > Yes I did. The fio source tarball comes with a number of sample
>> > job files including `ssd-steadystate.fio`, which might be
>> > useful for preconditioning of ssd devices.
>> >
>> > Regards,
>> >
>> > Juergen
>> >
>> > --
>> > GPG A997BA7A | 87FC DA31 5F00 C885 0DC3  E28F BD0D 4B33 A997 BA7A
>
> Brian,
>
> Check the SNIA standard for how to test SSD performance.
>
> There are quite likely more elegant solutions, but I use the jobfile below. It runs sequential workloads of 100% write, 100% read, 100% write, and finally 100% read. The purpose of the reads is to force any existing bad or marginal blocks out of the user pool. This will precondition the drive. Please be aware that using numjobs > 1 may not produce a sequential workload.
>
> Measuring steady-state values takes a little more effort. I use the write_bw_log feature, run for several hours, and use statistical analysis to find the mean and standard deviation in the target time frame.
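>
> For example, assuming the bandwidth log uses the usual comma-separated layout (time in msec, KB/s, direction, block size) and ends up in a file named something like 128kB_SeqWr_1x8_2ndPass_bw.log (exact file names and columns vary between fio versions), a rough sketch of that analysis could be:
>
> # mean and stddev of the bandwidth column (KB/s), skipping the first
> # 3600 samples (roughly the first hour at log_avg_msec=1000)
> awk -F, -v skip=3600 '
>     NR > skip { n++; sum += $2; sumsq += $2 * $2 }
>     END {
>         mean = sum / n
>         printf "samples=%d mean=%.0f KB/s stddev=%.0f KB/s\n", n, mean, sqrt(sumsq / n - mean * mean)
>     }' 128kB_SeqWr_1x8_2ndPass_bw.log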
>
> Cheers,
> Jeff
>
> [global]
> thread
> group_reporting=1
> direct=1
> norandommap=1
> randrepeat=0
> refill_buffers
> ioengine=${IOENGINE}
> filename=${FILENAME}
>
> log_avg_msec=1000
>
> [128kB_SeqWr_1x8_1stPass]
> write_bw_log=128kB_SeqWr_1x8_1stPass
> bs=128k
> rw=write
> numjobs=1
> iodepth=8
>
> [128kB_SeqRd_1x8_1stPass]
> stonewall
> write_bw_log=128kB_SeqRd_1x8_1stPass
> bs=128k
> rw=read
> numjobs=1
> iodepth=8
>
>
> [128kB_SeqWr_1x8_2ndPass]
> stonewall
> write_bw_log=128kB_SeqWr_1x8_2ndPass
> bs=128k
> rw=write
> numjobs=1
> iodepth=8
>
> [128kB_SeqRd_1x8_2ndPass]
> stonewall
> write_bw_log=128kB_SeqRd_1x8_2ndPass
> bs=128k
> rw=read
> numjobs=1
> iodepth=8
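>
> For completeness, a jobfile like the one above could be kicked off along these lines (the jobfile name, device, and ioengine here are only placeholders; double-check the target device, since the writes are destructive):
>
> # hypothetical invocation, with the jobfile saved as precondition.fio
> IOENGINE=libaio FILENAME=/dev/sdX fio precondition.fio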


thanks Jeffrey,

I think I am going to use your fio template, since the example
ssd-steadystate.fio (from the tarball) takes about 24 hours to run on
each drive because the [random-write-steady] section covers the entire
drive. The drives I am testing range from 200GB up to 800GB.

That is way too long for me, as I have 16 drives! I have been testing
the drives with different rw profiles (and sometimes different block
sizes and iodepths).

I usually limit my runs to size=10GB since I have so many drives to
test against different profiles, block sizes, etc. What size (in GB)
or time limit do you generally run against each drive to get good
results?
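
For reference, a bounded variant of the random-write phase might look
like this (size=10G matches what I use; the 30-minute runtime cap is
only an example):

[global]
ioengine=libaio
direct=1
filename=${mydisk}

[random-write-bounded]
description=Size/time bounded random write pass
rw=randwrite
bs=4k
iodepth=32
# only touch the first 10GB of the device
size=10G
# stop after 30 minutes even if size has not been fully written
runtime=1800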



===
ssd-steadystate.fio template below
===

[global]
ioengine=libaio
direct=1
group_reporting=1
#filename=/dev/fioa
filename=${mydisk}

[sequential-fill]
description=Sequential fill phase
rw=write
iodepth=16
bs=1M

[random-write-steady]
stonewall
description=Random write steady state phase
rw=randwrite
bs=4K
iodepth=32
numjobs=4
#
# might need to fix var below
#
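# (note: if ${mydisk} expands to a device path such as /dev/sdX, the log
#  prefix below becomes /dev/sdX-steady-state and the bandwidth logs are
#  created under /dev; a plain prefix may be preferable)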
write_bw_log=${mydisk}-steady-state