Re: Samsung PM863 SSD: surprisingly high Write IOPS measured using `fio`, over 4.6 times more than spec!?

Hi Sitsofe,

On Tue, Feb 15, 2022 at 12:32 PM Durval Menezes (MML) <jmmml@xxxxxxxxxx> wrote:
> On Mon, Feb 14, 2022 at 4:51 PM Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
> > [...]
> > The 18K IOPS value
> > might be when the drive has been fully written and there are no
> > pre-erased blocks available (via so-called preconditioning)... I'll
> > also note the whitepaper [1] mentions this:
> >
> > 	SSD Precondition: Sustained state (or steady state)
> > 	[...]
> > 	It's important to note that all performance items mentioned in this
> > 	white paper have been measured at the sustained state, except the
> > 	sequential read/write performance
>
> Thanks for going through the whitepaper and picking this up. It passed
> right by me...
>
> Anyway.... hummmrmrmrmr... I did a full "Secure erase" on the drive before
> starting these tests... perhaps that was it?
>
> In any case, I went through the whitepaper again, and found this:
>
> 	The sustained state in this document refers to the status that a
> 	128 KB sequential write has been completed equal to the drive capacity and
> 	then 4 KB random write has completed twice as much as the drive capacity
>
> OK, so at least there's a "recipe" for this preconditioning. I will try it
> and come back later to report.

That nailed it! Here's what I did to implement the "recipe":

a) Wrote pseudorandom data sequentially to the drive in 128 KB blocks until reaching the end of the device:

	date; openssl enc -rc4-40 -pass "pass:`dd bs=128 count=1 </dev/urandom 2>/dev/null`" </dev/zero | dd bs=128K of=/dev/sda oflag=direct iflag=fullblock; date
		Mon Feb 14 17:42:03 -03 2022
		dd: error writing '/dev/sda': No space left on device
		14651363+0 records in
		14651362+0 records out
		error writing output file
		1920383410176 bytes (1.9 TB, 1.7 TiB) copied, 6460.56 s, 297 MB/s
		Mon Feb 14 19:29:44 -03 2022
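
(As an aside, the same sequential fill could presumably be done with fio
itself instead of openssl+dd -- untested sketch below, the job name is
arbitrary. fio writes non-zero, scrambled buffer contents by default,
which should serve equally well against any transparent compression:)

	# untested sketch: 128 KB sequential fill of the whole device with fio;
	# with no --size given, fio writes until the end of the block device
	fio --filename=/dev/sda --name=seq_fill_precondition --rw=write \
		--bs=128k --iodepth=32 --numjobs=1 --ioengine=libaio \
		--direct=1 --group_reporting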

b) Then ran a randwrite fio test with 4 KB blocks, --size equal to the device capacity (so writes can land anywhere on it) and --io_size equal to twice the capacity (the total amount to write), per the whitepaper recipe:

	fdisk -l /dev/sda | grep ^Disk
		Disk /dev/sda: 1.8 TiB, 1920383410176 bytes, 3750748848 sectors
	export SIZE=1920383410176
	date; fio --filename=/dev/sda --name=device_iops_write --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 --size=${SIZE} --io_size=`expr ${SIZE} \* 2` --ioengine=libaio --direct=1 --group_reporting
		Mon Feb 14 19:29:44 -03 2022
		device_iops_write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
		fio-3.1
		Starting 1 process
		[...]
		Jobs: 1 (f=1): [w(1)][11.1%][r=0KiB/s,w=71.6MiB/s][r=0,w=18.3k IOPS][eta 07h:51m:36s]
		[...]
		^C
		fio: terminating on signal 2
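
(In hindsight, a time-bounded run would have avoided the manual ^C --
e.g. this untested variant of the same job, stopping after 10 minutes:)

	# untested sketch: same random-write job, but stop after 600 seconds
	fio --filename=/dev/sda --name=device_iops_write --rw=randwrite \
		--bs=4k --iodepth=32 --numjobs=1 --time_based --runtime=600 \
		--ioengine=libaio --direct=1 --group_reporting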

So, it didn't even take writing 2x the SSD capacity in random data to
bring write IOPS down to spec: just 11.1% of the run was enough (and led
me to interrupt the test; no point in eating up more of the NAND's write
cycles now that the 'issue' is explained).
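
(Back-of-the-envelope, to put that 11.1% in perspective, using the
numbers above:)

	# total io_size was 2x the capacity; 11.1% of it had been written
	echo "2 * 1920383410176 * 0.111" | bc	# ~426 GB of 4 KiB random writes
	echo "2 * 0.111" | bc -l		# i.e. only ~0.22 drive capacities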

Therefore: case closed. Thank you very much for helping me nail this
down; I really like it when things make sense.

Cheers,
-- 
   Durval.


