Re: Sharing SSD journals and SSD drive choice

On 04/25/2017 11:57 AM, David wrote:
> On 19 Apr 2017 18:01, "Adam Carheden" <carheden@xxxxxxxx> wrote:
> 
>     Does anyone know if XFS uses a single thread to write to its journal?
> 
> 
> You probably know this but just to avoid any confusion, the journal in
> this context isn't the metadata journaling in XFS, it's a separate
> journal written to by the OSD daemons

Ha! I didn't know that.

> 
> I think the number of threads per OSD is controlled by the 'osd op
> threads' setting which defaults to 2

So the ideal (for performance) Ceph cluster would be one SSD per HDD,
with 'osd op threads' set to whatever value fio shows as the optimal
number of threads for that drive?
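
To make sure I'm reading that right, I'm picturing something like the
following in ceph.conf. This is only a sketch: the value 8 is a
placeholder for whatever the fio sweep suggests, not a recommendation,
and the default is 2 as you say.

    [osd]
    # number of OSD op worker threads; defaults to 2
    osd op threads = 8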

> I would avoid the SanDisk and Hynix. The s3500 isn't too bad. Perhaps
> consider going up to a 37xx and putting more OSDs on it. Of course with
> the caveat that you'll lose more OSDs if it goes down. 

Why would you avoid the SanDisk and Hynix? Reliability (I think those
two are both TLC)? Brand trust? If it's my benchmarks in my previous
email, why not the Hynix? It's slower than the Intel, but sort of
decent, at least compared to the SanDisk.

My final numbers are below, including an older Samsung Evo (MLC, I think),
which did horribly, though not as badly as the SanDisk. The Seagate is a
10k RPM SAS "spinny" drive I tested as a control/SSD-to-HDD comparison.

         SanDisk SDSSDA240G, fio  1 jobs:   7.0 MB/s (5 trials)
         SanDisk SDSSDA240G, fio  2 jobs:   7.6 MB/s (5 trials)
         SanDisk SDSSDA240G, fio  4 jobs:   7.5 MB/s (5 trials)
         SanDisk SDSSDA240G, fio  8 jobs:   7.6 MB/s (5 trials)
         SanDisk SDSSDA240G, fio 16 jobs:   7.6 MB/s (5 trials)
         SanDisk SDSSDA240G, fio 32 jobs:   7.6 MB/s (5 trials)
         SanDisk SDSSDA240G, fio 64 jobs:   7.6 MB/s (5 trials)
HFS250G32TND-N1A2A 30000P10, fio  1 jobs:   4.2 MB/s (5 trials)
HFS250G32TND-N1A2A 30000P10, fio  2 jobs:   0.6 MB/s (5 trials)
HFS250G32TND-N1A2A 30000P10, fio  4 jobs:   7.5 MB/s (5 trials)
HFS250G32TND-N1A2A 30000P10, fio  8 jobs:  17.6 MB/s (5 trials)
HFS250G32TND-N1A2A 30000P10, fio 16 jobs:  32.4 MB/s (5 trials)
HFS250G32TND-N1A2A 30000P10, fio 32 jobs:  64.4 MB/s (5 trials)
HFS250G32TND-N1A2A 30000P10, fio 64 jobs:  71.6 MB/s (5 trials)
                SAMSUNG SSD, fio  1 jobs:   2.2 MB/s (5 trials)
                SAMSUNG SSD, fio  2 jobs:   3.9 MB/s (5 trials)
                SAMSUNG SSD, fio  4 jobs:   7.1 MB/s (5 trials)
                SAMSUNG SSD, fio  8 jobs:  12.0 MB/s (5 trials)
                SAMSUNG SSD, fio 16 jobs:  18.3 MB/s (5 trials)
                SAMSUNG SSD, fio 32 jobs:  25.4 MB/s (5 trials)
                SAMSUNG SSD, fio 64 jobs:  26.5 MB/s (5 trials)
        INTEL SSDSC2BB150G7, fio  1 jobs:  91.2 MB/s (5 trials)
        INTEL SSDSC2BB150G7, fio  2 jobs: 132.4 MB/s (5 trials)
        INTEL SSDSC2BB150G7, fio  4 jobs: 138.2 MB/s (5 trials)
        INTEL SSDSC2BB150G7, fio  8 jobs: 116.9 MB/s (5 trials)
        INTEL SSDSC2BB150G7, fio 16 jobs:  61.8 MB/s (5 trials)
        INTEL SSDSC2BB150G7, fio 32 jobs:  22.7 MB/s (5 trials)
        INTEL SSDSC2BB150G7, fio 64 jobs:  16.9 MB/s (5 trials)
        SEAGATE ST9300603SS, fio  1 jobs:   0.7 MB/s (5 trials)
        SEAGATE ST9300603SS, fio  2 jobs:   0.9 MB/s (5 trials)
        SEAGATE ST9300603SS, fio  4 jobs:   1.6 MB/s (5 trials)
        SEAGATE ST9300603SS, fio  8 jobs:   2.0 MB/s (5 trials)
        SEAGATE ST9300603SS, fio 16 jobs:   4.6 MB/s (5 trials)
        SEAGATE ST9300603SS, fio 32 jobs:   6.9 MB/s (5 trials)
        SEAGATE ST9300603SS, fio 64 jobs:   0.6 MB/s (5 trials)

For those who come across this and are looking for drives for purposes
other than Ceph: those are all sequential write numbers with caching
disabled, a very Ceph-journal-specific test. The SanDisk held its own
against the Intel in some benchmarks on Windows that didn't disable
caching. It may very well be a perfectly good drive for other purposes.
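
For the curious, the test was a sync sequential write of roughly this
shape (block size, runtime, and device path here are illustrative
placeholders rather than an exact record of what I ran):

    # WARNING: writes to the raw device and destroys its contents
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=16 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test

with --numjobs swept from 1 to 64 and each data point averaged over 5
runs.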

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


