Re: Consumer-grade SSD in Ceph

Hi Sinan,

I would not recommend using 860 EVO or Crucial MX500 SSDs in a Ceph cluster, as those are consumer-grade drives, not enterprise ones.

Performance and durability will both be issues. If feasible, I would simply go NVMe, as it sounds like you will be using this disk to store the journal or DB partition.


From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Antoine Lecrux <antoine.lecrux@xxxxxxxxxxx>
Sent: Thursday, December 19, 2019 4:02 PM
To: Udo Lembke <ulembke@xxxxxxxxxxxx>; Sinan Polat <sinan@xxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Consumer-grade SSD in Ceph
 
Hi,

If you're looking for a consumer grade SSD, make sure it has capacitors to protect you from data corruption in case of a power outage on the entire Ceph Cluster.
That's the most important technical specification to look for.

- Antoine

-----Original Message-----
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of Udo Lembke
Sent: Thursday, December 19, 2019 3:22 PM
To: Sinan Polat <sinan@xxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Consumer-grade SSD in Ceph

Hi,
if you add an SSD with a short lifetime to more than one server, you can run into real trouble (data loss), even if all the other SSDs are enterprise grade.
Ceph mixes all data in PGs, which are spread over many disks. If one disk fails, no problem; but if the next two fail shortly afterwards due to the high I/O of recovery, you will have data loss.
If the consumer SSDs are confined to a single node, however, that whole node can go down without trouble...

I tried consumer SSDs as a journal a long time ago - it was a bad idea!
But these SSDs are cheap - buy one and do an I/O test.
If you monitor the drive's lifetime (wear), it's perhaps workable for your setup.
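Udo's advice (run an I/O test, then watch drive wear) can be sketched with fio and smartctl. This is an illustrative example, not something from the thread: the device name /dev/sdX is a placeholder, and the fio parameters simply approximate the small synchronous writes a Ceph journal/DB device sees.

```shell
#!/bin/sh
# Hedged sketch of Udo's "buy one and do the io-test" suggestion.
# WARNING: fio writes directly to the device given in --filename,
# destroying its contents -- point it at a scratch disk only.
# Assumes fio and smartmontools are installed.

DEV=/dev/sdX   # placeholder -- replace with the SSD under test

# 4k synchronous writes at queue depth 1: the access pattern that
# separates drives with power-loss protection (thousands of IOPS)
# from consumer drives without it (often only a few hundred).
fio --name=journal-test --filename="$DEV" \
    --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting

# Monitor wear over time. The relevant SMART attribute is
# vendor-specific (e.g. Wear_Leveling_Count on Samsung,
# Percent_Lifetime_Remain on Crucial).
smartctl -A "$DEV"
```

Running the fio test once before deployment and checking the SMART wear attribute periodically afterwards gives a rough idea of whether a given consumer drive can survive the workload.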

Udo


Am 19.12.19 um 20:20 schrieb Sinan Polat:
> Hi all,
>
> Thanks for the replies. I am not worried about their lifetime. We will be adding only 1 SSD disk per physical server. All other SSDs are enterprise drives. If the added consumer-grade disk fails, no problem.
>
> I am more curious about their I/O performance. I do not want a 50% drop in performance.
>
> So anyone any experience with 860 EVO or Crucial MX500 in a Ceph setup?
>
> Thanks!
>
>> Op 19 dec. 2019 om 19:18 heeft Mark Nelson <mnelson@xxxxxxxxxx> het volgende geschreven:
>>
>> The way I try to look at this is:
>>
>>
>> 1) How much more do the enterprise grade drives cost?
>>
>> 2) What are the benefits? (Faster performance, longer life, etc)
>>
>> 3) How much does it cost to deal with downtime, diagnose issues, and replace malfunctioning hardware?
>>
>>
>> My personal take is that enterprise drives are usually worth it. There may be consumer grade drives that may be worth considering in very specific scenarios if they still have power loss protection and high write durability.  Even when I was in academia years ago with very limited budgets, we got burned with consumer grade SSDs to the point where we had to replace them all.  You have to be very careful and know exactly what you are buying.
>>
>>
>> Mark
>>
>>
>>> On 12/19/19 12:04 PM, jesper@xxxxxxxx wrote:
>>> I don't think “usually” is good enough in a production setup.
>>>
>>>
>>>
>>> Sent from myMail for iOS
>>>
>>>
>>> Thursday, 19 December 2019, 12.09 +0100 from Виталий Филиппов <vitalif@xxxxxxxxxx>:
>>>
>>>    Usually it doesn't; it only harms performance and probably SSD
>>>    lifetime too
>>>
>>>    > I would not be running Ceph on SSDs without power-loss protection.
>>>    > It presents a potential data loss scenario
>>>
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
