RE: precondition ssd drives w/ fio

> -----Original Message-----
> From: Brian L. [mailto:brianclam@xxxxxxxxx]
> Sent: Friday, October 04, 2013 11:23 AM
> To: Jeffrey Mcvay (jmcvay)
> Cc: Juergen Salk; fio@xxxxxxxxxxxxxxx
> Subject: Re: precondition ssd drives w/ fio
> 
> On Thu, Oct 3, 2013 at 2:39 PM, Jeffrey Mcvay (jmcvay)
> <jmcvay@xxxxxxxxxx> wrote:
> >
> >> -----Original Message-----
> >> From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On Behalf Of Brian L.
> >> Sent: Thursday, October 03, 2013 2:09 PM
> >> To: Juergen Salk; fio@xxxxxxxxxxxxxxx
> >> Subject: Re: precondition ssd drives w/ fio
> >>
> >> Thank you all for the response.  I found ssd-steadystate.fio in the
> >> tarball.
> >>
> >> Brian
> >>
> >> Brian L.
> >>
> >>
> >> On Thu, Oct 3, 2013 at 1:42 PM, Juergen Salk <juergen.salk@xxxxxx>
> >> wrote:
> >> > * Brian L. <brianclam@xxxxxxxxx> [131003 12:31]:
> >> >
> >> >> For benchmarking SSD drives, I was told that we should
> >> >> precondition our drives to get more accurate real-world readings.
> >> >>
> >> >> I was wondering if anyone is using fio itself to precondition SSD
> >> >> drives, or uses a different script to populate random data on the
> >> >> drives?
> >> >
> >> > Yes, I did. The fio source tarball comes with a number of sample
> >> > job files, including `ssd-steadystate.fio', which might be
> >> > useful for preconditioning SSD devices.
> >> >
> >> > Regards,
> >> >
> >> > Juergen
> >> >
> >> > --
> >> > GPG A997BA7A | 87FC DA31 5F00 C885 0DC3  E28F BD0D 4B33 A997 BA7A
> >
> > Brian,
> >
> > Check the SNIA standard for how to test SSD performance.
> >
> > There are quite likely more elegant solutions, but I use the jobfile
> > below. It runs sequential workloads of 100% write, 100% read, 100%
> > write, and finally 100% read. The purpose of the reads is to force any
> > existing bad or marginal blocks out of the user pool. This will
> > precondition the drive. Please be aware that using numjobs > 1 may not
> > produce a sequential workload.
> >
> > Measuring steady-state values takes a little more effort. I use the
> > write_bw_log feature, run for several hours, and use statistical
> > analysis to find the mean and standard deviation in the target time
> > frame.
> >
> > Cheers,
> > Jeff
> >
> > [global]
> > thread
> > group_reporting=1
> > direct=1
> > norandommap=1
> > randrepeat=0
> > refill_buffers
> > ioengine=${IOENGINE}
> > filename=${FILENAME}
> >
> > log_avg_msec=1000
> >
> > [128kB_SeqWr_1x8_1stPass]
> > write_bw_log=128kB_SeqWr_1x8_1stPass
> > bs=128k
> > rw=write
> > numjobs=1
> > iodepth=8
> >
> > [128kB_SeqRd_1x8_1stPass]
> > stonewall
> > write_bw_log=128kB_SeqRd_1x8_1stPass
> > bs=128k
> > rw=read
> > numjobs=1
> > iodepth=8
> >
> >
> > [128kB_SeqWr_1x8_2ndPass]
> > stonewall
> > write_bw_log=128kB_SeqWr_1x8_2ndPass
> > bs=128k
> > rw=write
> > numjobs=1
> > iodepth=8
> >
> > [128kB_SeqRd_1x8_2ndPass]
> > stonewall
> > write_bw_log=128kB_SeqRd_1x8_2ndPass
> > bs=128k
> > rw=read
> > numjobs=1
> > iodepth=8
> 
> 
> Thanks Jeffrey,
> 
> I think I am going to use your fio template, since the example
> ssd-steadystate.fio (in the tarball) takes about 24 hours to run per
> drive because the [random-write-steady] section runs over the entire
> drive. The drives I am testing range from 200GB up to 800GB.
> 
> That is way too long for me as I have 16 drives! I have been testing
> the drives with different rw profiles (and different blocksizes and
> iodepths at times).
> 
> I usually limit my run to size=10GB since I have so many drives that I
> need to run against different profiles, blocksizes, etc. What size (in
> GB) or time limit do you generally run against each drive to get good
> results?
> 
> 
> 
> ===
> ssd-steadystate.fio template below
> ===
> 
> [global]
> ioengine=libaio
> direct=1
> group_reporting=1
> #filename=/dev/fioa
> filename=${mydisk}
> 
> [sequential-fill]
> description=Sequential fill phase
> rw=write
> iodepth=16
> bs=1M
> 
> [random-write-steady]
> stonewall
> description=Random write steady state phase
> rw=randwrite
> bs=4K
> iodepth=32
> numjobs=4
> #
> # might need to fix var below
> #
> write_bw_log=${mydisk}-steady-state

To truly profile an SSD you have to use the entire drive. If you restrict the writes to a fraction of the drive, the SSD will use the remainder as additional over-provisioning, which reduces write amplification and inflates the measured performance.
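
For a rough sense of scale (back-of-the-envelope numbers of my own, not measurements from this thread), here is a small Python sketch of the effective over-provisioning you get when only part of the LBA span is ever written; the helper name and capacities are hypothetical:

# Hypothetical helper: effective over-provisioning when only part of an
# SSD's LBA span is ever written (illustrative numbers, not measurements).
def effective_op(physical_gb, written_gb, factory_spare_gb=0.0):
    """Spare capacity the controller can reclaim into, as a fraction of
    the data actually written."""
    spare = physical_gb + factory_spare_gb - written_gb
    return spare / written_gb

# Writing only a 10GB span of a 200GB drive leaves roughly 19x the written
# capacity as spare, so garbage collection stays cheap and the measured
# numbers look far better than full-drive steady state.
print(effective_op(200, 10))    # 19.0
print(effective_op(200, 200))   # 0.0 (no spare beyond factory over-provisioning)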

Also, most SSDs will not access NAND for reads to LBAs that have not been written. When you measure a read to an unwritten LBA, you actually measure the speed at which the SSD can generate zeros and ship them back.

To get to your question of how long to run a random workload before measuring performance, refer to the image at the link below.

http://www.ssdperformanceblog.com/wp-content/uploads/2010/07/SSDstates.png

Prior to section A the drive was preconditioned with two sequential fills.

Section A: This is the performance seen before background reclamation starts. The drive is writing to NAND blocks that are already erased.
Section B: Reclamation starts. This is the lowest performance the SSD will show. The drive is moving user data to free blocks for erase. This is also the period with the highest rate of write amplification.
Section C: The drive has reached steady-state operation. Background reclamation is still running, but at a lower rate, because more of the user data has been invalidated and less data movement is required to free a block for erase. The rate of write amplification is also consistent here, and overall write amplification will trend toward this value over time.

How long you need to run a random workload to reach steady state depends on the SSD. The best method is to graph the data and look for the steady-state performance. A more statistical approach is to measure the mean performance over some unit of time and compare it across several such samples. When the mean performance samples settle to an acceptably small variance, you're done.
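
As one possible sketch of that statistical check (not the exact script I use), the Python below reads a bandwidth log produced by write_bw_log, averages it into fixed time windows, and declares steady state once the last few window means stay within a chosen spread. It assumes the common "time_ms, bandwidth_KiB/s, direction, blocksize" column layout; the 60-second window and 10% spread are arbitrary starting points, so check your fio version's log format and tune the thresholds for your drives.

#!/usr/bin/env python
# Sketch: steady-state check on a fio bandwidth log (write_bw_log output,
# e.g. 128kB_SeqWr_1x8_1stPass_bw.log). Column layout and thresholds are
# assumptions; verify them against your fio version before relying on it.
import csv
import statistics
import sys

WINDOW_SEC = 60        # averaging window (assumed)
MAX_REL_SPREAD = 0.10  # last windows within 10% of each other => steady (assumed)

def load_bw(path):
    """Return (seconds, KiB/s) samples from a fio bandwidth log."""
    samples = []
    with open(path) as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                samples.append((float(row[0]) / 1000.0, float(row[1])))
    return samples

def window_means(samples, window=WINDOW_SEC):
    """Mean bandwidth per fixed-size time window."""
    buckets = {}
    for t, bw in samples:
        buckets.setdefault(int(t // window), []).append(bw)
    return [statistics.mean(buckets[k]) for k in sorted(buckets)]

def is_steady(means, n_windows=5, max_rel_spread=MAX_REL_SPREAD):
    """Steady if the last n window means stay within max_rel_spread of their mean."""
    tail = means[-n_windows:]
    if len(tail) < n_windows:
        return False
    m = statistics.mean(tail)
    return m > 0 and (max(tail) - min(tail)) / m <= max_rel_spread

if __name__ == "__main__":
    means = window_means(load_bw(sys.argv[1]))
    print("last window means (KiB/s):", [round(m, 1) for m in means[-5:]])
    print("steady state reached:", is_steady(means))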

Cheers,
Jeff



