Re: xfs performance seems lower after long sequential writes to SSD

On Mon, 31 Jul 2017 16:51:29 +0800,
王勇 <wang.yong@xxxxxxxxxxx> wrote:

> Hi All,
> Recently I ran into a strange issue. I have been doing sequential
> writes (blocksize=1m) into mounted folders.
> If the total size is small, the average write rate is 370 MB/s
> (few files: 4 MB * 256 * 60).
> If the total size is large, the average write rate is 180 MB/s
> (many files: 4 MB * 256 * 600).
> In a raw SSD benchmark, sequential write is 400 MB/s.
> 
> Can anybody help explain this? Is one of my arguments wrong, or is it
> something else?

There are several problems here:

First, you didn't mention which version of xfsprogs you're using (try
"xfs_repair -V", for instance). You didn't say what your SSD is like
(make, model, size, flash type, interface, etc.), either.
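
For instance, something like the following would capture the basics
(the device name /dev/sda below is only a placeholder; adjust it to
your drive):

  # xfsprogs and kernel versions
  xfs_repair -V
  uname -r

  # drive model, size, interface and rotational flag
  lsblk -d -o NAME,MODEL,SIZE,ROTA,TRAN /dev/sda
  smartctl -i /dev/sda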

Second, you didn't give the exact command lines you used in each of
your tests (small, big, and raw): "dd if=/dev/zero...", "iozone", etc.?
Nor how you measured throughput: is it the overall mean throughput as
reported by "dd" after the fact, or did you sample the performance at
some points during the test?
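
As a sketch of what a complete test description could look like (the
mount point, file name and sizes below are placeholders, not taken from
your setup):

  # 60 GB of sequential 1 MiB writes, flushed before dd reports a rate
  dd if=/dev/zero of=/mnt/xfs/testfile bs=1M count=61440 conv=fdatasync

  # roughly the same workload with fio, which also reports min/max/mean
  # bandwidth rather than a single overall figure
  fio --name=seqwrite --filename=/mnt/xfs/testfile --rw=write --bs=1M \
      --size=60g --ioengine=libaio --iodepth=8 --direct=1

  # in another terminal, sample device throughput once per second
  iostat -xm 1

Sampling during the run is what will show whether the rate collapses
partway through rather than being low from the start.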

Third, SSDs don't work like HDDs. In particular, you can't simply
overwrite data blocks that are marked unused but still hold data;
you must erase them first. Worse, you write pages (typically 4K) but
erase much larger blocks (typically 128K or more). For example,
rewriting a single 4K page in a full 128K erase block can force the
controller to copy the other 31 pages elsewhere and erase the whole
block, so a 4K write may cost far more than 4K of internal work.

So you can't benchmark an SSD properly by simply writing again and
again; you MUST use the "trim" command beforehand, to clean up blocks
that have been written but NOT actually erased. Otherwise the SSD
controller will need to perform some "garbage collection" while
writing, causing slowdowns or even pauses.
Also notice that many SSD controllers perform on-the-fly compression.
That may greatly affect performance.
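
A rough sketch of both points (the mount point and device name are
placeholders, and blkdiscard destroys everything on the device, so only
use it on a drive you intend to wipe):

  # discard unused blocks on a mounted XFS filesystem before a test run
  fstrim -v /mnt/xfs

  # before a raw-device benchmark, discard the whole drive
  blkdiscard /dev/sdX

As for compression, writing incompressible data takes it out of the
picture: as far as I know fio fills its buffers with more or less
random data by default, while "--zero_buffers" makes them all zeroes,
so comparing the two runs hints at whether the controller compresses.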

Let's say that your SSD is 1000 GB. You run 10 tests with a 60 GB
data set. That fills 600 GB of flash.

You erased the files, but *that doesn't necessarily clear the flash*. So
when you try writing 600 GB the next time, the SSD will use its
remaining 400 GB, then become very slow for the last 200 GB because it
must clear up some space before each write...

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------


