Re: Sharing SSD journals and SSD drive choice

Hi,

>> What I'm trying to get from the list is /why/ the "enterprise" drives 
>> are important. Performance? Reliability? Something else? 

Performance, for sure, at least for sync writes (see https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/)
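
For reference, the test from that post boils down to roughly the following fio run (a sketch: /dev/sdX is a placeholder, and this writes directly to the device, so don't point it at a disk holding data):

    # sequential 4k O_DSYNC writes, the same pattern a filestore journal sees
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test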

Reliability: yes, enterprise drives have supercapacitors for power-loss protection, and better endurance (1 DWPD for the S3520, 3 DWPD for the S3610)
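
As a rough worked example of what DWPD means: rated endurance is approximately capacity x DWPD x 365 x warranty years, so a 480GB drive at 1 DWPD over a 5-year warranty comes to about 480 x 1 x 365 x 5 ≈ 876 TB written, versus roughly 2.6 PB at 3 DWPD (back-of-the-envelope numbers; check the Intel datasheets for exact TBW ratings).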


>> Also, 4 x Intel DC S3520 costs as much as 1 x Intel DC S3610. Obviously 
>> the single drive leaves more bays free for OSD disks, but is there any 
>> other reason a single S3610 is preferable to 4 S3520s? Wouldn't 4xS3520s 
>> mean: 

Where do you see this price difference?

For me, the S3520 is around 25-30% cheaper than the S3610.


----- Original Message -----
From: "Adam Carheden" <carheden@xxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Wednesday, 26 April 2017 16:53:48
Subject: Re: Sharing SSD journals and SSD drive choice

What I'm trying to get from the list is /why/ the "enterprise" drives 
are important. Performance? Reliability? Something else? 

The Intel was the only one I was seriously considering. The others were 
just ones I had for other purposes, so I thought I'd see how they fared 
in benchmarks. 

The Intel was the clear winner, but my tests did show that throughput 
tanked with more threads. Hypothetically, if I were throwing 16 OSDs at 
it, all with osd op threads = 2, don't the benchmarks below show that 
the Hynix would be a better choice (at least for performance)? 

Also, 4 x Intel DC S3520 costs as much as 1 x Intel DC S3610. Obviously 
the single drive leaves more bays free for OSD disks, but is there any 
other reason a single S3610 is preferable to 4 S3520s? Wouldn't 4xS3520s 
mean: 

a) fewer OSDs go down if the SSD fails 

b) better throughput (I'm speculating that the S3610 isn't 4 times 
faster than the S3520) 

c) load spread across 4 SATA channels (though I suppose this doesn't 
really matter, since the drives can't saturate the SATA bus). 
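
For what it's worth, the journal-to-OSD mapping is just partitioning; a 
sketch of how one of 4 journal SSDs could back 4 of the 16 OSDs using the 
filestore-era ceph-disk tool (device names are placeholders): 

    # 4 HDD OSDs (/dev/sdb-/dev/sde) sharing one journal SSD (/dev/sdf); 
    # each prepare call carves a new journal partition out of the SSD 
    ceph-disk prepare /dev/sdb /dev/sdf 
    ceph-disk prepare /dev/sdc /dev/sdf 
    ceph-disk prepare /dev/sdd /dev/sdf 
    ceph-disk prepare /dev/sde /dev/sdf 

Repeated for the other 3 SSDs, one SSD failure then takes down 4 OSDs 
rather than all 16. 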


-- 
Adam Carheden 

On 04/26/2017 01:55 AM, Eneko Lacunza wrote: 
> Adam, 
> 
> What David said before about SSD drives is very important. I will tell 
> you another way: use enterprise grade SSD drives, not consumer grade. 
> Also, pay attention to endurance. 
> 
> The only suitable drive for Ceph I see in your tests is SSDSC2BB150G7, 
> and it probably isn't even the most suitable SATA SSD from Intel; 
> better use the S3610 or S3710 series. 
> 
> Cheers 
> Eneko 
> 
> On 25/04/17 at 21:02, Adam Carheden wrote: 
>> On 04/25/2017 11:57 AM, David wrote: 
>>> On 19 Apr 2017 18:01, "Adam Carheden" <carheden@xxxxxxxx 
>>> <mailto:carheden@xxxxxxxx>> wrote: 
>>> 
>>> Does anyone know if XFS uses a single thread to write to its 
>>> journal? 
>>> 
>>> 
>>> You probably know this but just to avoid any confusion, the journal in 
>>> this context isn't the metadata journaling in XFS, it's a separate 
>>> journal written to by the OSD daemons. 
>> Ha! I didn't know that. 
>> 
>>> I think the number of threads per OSD is controlled by the 'osd op 
>>> threads' setting, which defaults to 2. 
>> So the ideal (for performance) CEPH cluster would be one SSD per HDD 
>> with 'osd op threads' set to whatever value fio shows as the optimal 
>> number of threads for that drive then? 
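
For reference, that setting lives in the [osd] section of ceph.conf; a 
minimal sketch (the value shown is just the default, not a recommendation): 

    [osd] 
    osd op threads = 2    # op worker threads per OSD daemon (default 2) 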
>> 
>>> I would avoid the SanDisk and Hynix. The s3500 isn't too bad. Perhaps 
>>> consider going up to a 37xx and putting more OSDs on it. Of course with 
>>> the caveat that you'll lose more OSDs if it goes down. 
>> Why would you avoid the SanDisk and Hynix? Reliability (I think those 
>> two are both TLC)? Brand trust? If it's my benchmarks in my previous 
>> email, why not the Hynix? It's slower than the Intel, but sort of 
>> decent, at least compared to the SanDisk. 
>> 
>> My final numbers are below, including an older Samsung Evo (MLC I think) 
>> which did horribly, though not as bad as the SanDisk. The Seagate is a 
>> 10kRPM SAS "spinny" drive I tested as a control/SSD-to-HDD comparison. 
>> 
>> SanDisk SDSSDA240G, fio 1 jobs: 7.0 MB/s (5 trials) 
>> SanDisk SDSSDA240G, fio 2 jobs: 7.6 MB/s (5 trials) 
>> SanDisk SDSSDA240G, fio 4 jobs: 7.5 MB/s (5 trials) 
>> SanDisk SDSSDA240G, fio 8 jobs: 7.6 MB/s (5 trials) 
>> SanDisk SDSSDA240G, fio 16 jobs: 7.6 MB/s (5 trials) 
>> SanDisk SDSSDA240G, fio 32 jobs: 7.6 MB/s (5 trials) 
>> SanDisk SDSSDA240G, fio 64 jobs: 7.6 MB/s (5 trials) 
>> 
>> HFS250G32TND-N1A2A 30000P10, fio 1 jobs: 4.2 MB/s (5 trials) 
>> HFS250G32TND-N1A2A 30000P10, fio 2 jobs: 0.6 MB/s (5 trials) 
>> HFS250G32TND-N1A2A 30000P10, fio 4 jobs: 7.5 MB/s (5 trials) 
>> HFS250G32TND-N1A2A 30000P10, fio 8 jobs: 17.6 MB/s (5 trials) 
>> HFS250G32TND-N1A2A 30000P10, fio 16 jobs: 32.4 MB/s (5 trials) 
>> HFS250G32TND-N1A2A 30000P10, fio 32 jobs: 64.4 MB/s (5 trials) 
>> HFS250G32TND-N1A2A 30000P10, fio 64 jobs: 71.6 MB/s (5 trials) 
>> 
>> SAMSUNG SSD, fio 1 jobs: 2.2 MB/s (5 trials) 
>> SAMSUNG SSD, fio 2 jobs: 3.9 MB/s (5 trials) 
>> SAMSUNG SSD, fio 4 jobs: 7.1 MB/s (5 trials) 
>> SAMSUNG SSD, fio 8 jobs: 12.0 MB/s (5 trials) 
>> SAMSUNG SSD, fio 16 jobs: 18.3 MB/s (5 trials) 
>> SAMSUNG SSD, fio 32 jobs: 25.4 MB/s (5 trials) 
>> SAMSUNG SSD, fio 64 jobs: 26.5 MB/s (5 trials) 
>> 
>> INTEL SSDSC2BB150G7, fio 1 jobs: 91.2 MB/s (5 trials) 
>> INTEL SSDSC2BB150G7, fio 2 jobs: 132.4 MB/s (5 trials) 
>> INTEL SSDSC2BB150G7, fio 4 jobs: 138.2 MB/s (5 trials) 
>> INTEL SSDSC2BB150G7, fio 8 jobs: 116.9 MB/s (5 trials) 
>> INTEL SSDSC2BB150G7, fio 16 jobs: 61.8 MB/s (5 trials) 
>> INTEL SSDSC2BB150G7, fio 32 jobs: 22.7 MB/s (5 trials) 
>> INTEL SSDSC2BB150G7, fio 64 jobs: 16.9 MB/s (5 trials) 
>> 
>> SEAGATE ST9300603SS, fio 1 jobs: 0.7 MB/s (5 trials) 
>> SEAGATE ST9300603SS, fio 2 jobs: 0.9 MB/s (5 trials) 
>> SEAGATE ST9300603SS, fio 4 jobs: 1.6 MB/s (5 trials) 
>> SEAGATE ST9300603SS, fio 8 jobs: 2.0 MB/s (5 trials) 
>> SEAGATE ST9300603SS, fio 16 jobs: 4.6 MB/s (5 trials) 
>> SEAGATE ST9300603SS, fio 32 jobs: 6.9 MB/s (5 trials) 
>> SEAGATE ST9300603SS, fio 64 jobs: 0.6 MB/s (5 trials) 
>> 
>> For those who come across this and are looking for drives for purposes 
>> other than CEPH, those are all sequential write numbers with caching 
>> disabled, a very CEPH-journal-specific test. The SanDisk held its own 
>> against the Intel using some benchmarks on Windows that didn't disable 
>> caching. It may very well be a perfectly good drive for other purposes. 
>> 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



