RE: precondition ssd drives w/ fio

> -----Original Message-----
> From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On
> Behalf Of Brian L.
> Sent: Thursday, October 03, 2013 2:09 PM
> To: Juergen Salk; fio@xxxxxxxxxxxxxxx
> Subject: Re: precondition ssd drives w/ fio
> 
> Thank you all for the response.  I found ssd-steadystate.fio in the
> tarball.
> 
> Brian
> 
> Brian L.
> 
> 
> On Thu, Oct 3, 2013 at 1:42 PM, Juergen Salk <juergen.salk@xxxxxx>
> wrote:
> > * Brian L. <brianclam@xxxxxxxxx> [131003 12:31]:
> >
> >> For benchmarking SSD drives, I was told that we should precondition
> >> our drives to get more accurate real world reading.
> >>
> >> I was wondering if anyone is using fio itself to precondition ssd
> >> drives or use a different script to populate random data on the
> >> drives?
> >
> > Yes I did. The fio source tarball comes with a number of sample
> > job files including 'ssd-steadystate.fio', which might be
> > useful for preconditioning of ssd devices.
> >
> > Regards,
> >
> > Juergen
> >
> > --
> > GPG A997BA7A | 87FC DA31 5F00 C885 0DC3  E28F BD0D 4B33 A997 BA7A
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

Brian,

Check the SNIA Solid State Storage Performance Test Specification (PTS) for how to test SSD performance.

There are quite likely more elegant solutions, but I use the jobfile below. It runs sequential workloads of 100% write, 100% read, 100% write, and finally 100% read. The purpose of the reads is to force any existing bad or marginal blocks out of the user pool. This will precondition the drive. Please be aware that using numjobs > 1 may not produce a truly sequential workload.

Measuring steady-state values takes a little more effort. I use the write_bw_log feature, run for several hours, and use statistical analysis to find the mean and standard deviation in the target time frame.
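A minimal sketch of that analysis step, assuming the bandwidth log produced by the first-pass job below and an arbitrarily chosen time window (pick whatever interval you consider steady state). fio bandwidth logs are comma-separated lines of time (msec), bandwidth (KiB/s), data direction, and block size:

```python
import statistics

def steady_state_stats(log_path, window_start_ms, window_end_ms):
    """Mean and sample stddev of bandwidth samples inside a time window.

    Each log line looks like: "<time msec>, <bandwidth KiB/s>, <dir>, <bs>"
    """
    samples = []
    with open(log_path) as f:
        for line in f:
            fields = line.split(",")
            t = int(fields[0])
            if window_start_ms <= t <= window_end_ms:
                samples.append(int(fields[1]))
    return statistics.mean(samples), statistics.stdev(samples)

# Example (log name and window are assumptions; here hours 3 to 4 of a run):
# mean_kibs, sd = steady_state_stats("128kB_SeqWr_1x8_1stPass_bw.log",
#                                    3 * 3600 * 1000, 4 * 3600 * 1000)
```

With log_avg_msec=1000 as in the jobfile, each sample is already a one-second average, so the stddev reflects second-to-second drift rather than per-I/O noise.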

Cheers,
Jeff

[global]
thread
group_reporting=1
direct=1
norandommap=1
randrepeat=0
refill_buffers
ioengine=${IOENGINE}
filename=${FILENAME}

log_avg_msec=1000

[128kB_SeqWr_1x8_1stPass]
write_bw_log=128kB_SeqWr_1x8_1stPass
bs=128k
rw=write
numjobs=1
iodepth=8

[128kB_SeqRd_1x8_1stPass]
stonewall
write_bw_log=128kB_SeqRd_1x8_1stPass
bs=128k
rw=read
numjobs=1
iodepth=8


[128kB_SeqWr_1x8_2ndPass]
stonewall
write_bw_log=128kB_SeqWr_1x8_2ndPass
bs=128k
rw=write
numjobs=1
iodepth=8

[128kB_SeqRd_1x8_2ndPass]
stonewall
write_bw_log=128kB_SeqRd_1x8_2ndPass
bs=128k
rw=read
numjobs=1
iodepth=8
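Note that the jobfile pulls ${IOENGINE} and ${FILENAME} from the environment, so both must be set before fio parses it. A minimal driver sketch, where the jobfile name, engine, and device are assumptions:

```python
import os

def fio_command(jobfile, ioengine, filename):
    """Build argv and environment for launching the jobfile above.

    fio substitutes ${IOENGINE} and ${FILENAME} from the environment
    when it parses the jobfile, so both are set in the child env here.
    """
    env = dict(os.environ, IOENGINE=ioengine, FILENAME=filename)
    return ["fio", jobfile], env

# e.g. (device name is an assumption; this workload is destructive):
# import subprocess
# argv, env = fio_command("precondition.fio", "libaio", "/dev/nvme0n1")
# subprocess.run(argv, env=env, check=True)
```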



