Re: how possible is that ceph cluster crash

Never *ever* use nobarrier with ceph under *any* circumstances.  I
cannot stress this enough.
-Sam
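One quick way to act on this warning is to audit a node's mounts for the option. The sketch below is illustrative, not part of Sam's message: the helper name is made up, and the `/proc/mounts` scan assumes a Linux node with XFS-backed OSDs.

```shell
#!/bin/sh
# Illustrative helper: flag a mount-options string that contains "nobarrier".
# With nobarrier, XFS skips cache-flush (barrier) requests, so data the OSD
# believes is durable can be lost or corrupted on power failure.
check_barriers() {
    case ",$1," in
        *,nobarrier,*) echo "UNSAFE: nobarrier set"; return 1 ;;
        *)             echo "OK: write barriers enabled"; return 0 ;;
    esac
}

# Audit every mounted XFS filesystem on this node.
# /proc/mounts fields: device, mount point, fstype, options, ...
awk '$3 == "xfs" { print $2, $4 }' /proc/mounts 2>/dev/null |
while read -r mnt opts; do
    printf '%s: ' "$mnt"
    check_barriers "$opts"
done
```

The helper only parses an options string, so it can also be pointed at fstab entries or deployment templates before they ever reach a node.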

On Fri, Nov 18, 2016 at 10:39 AM, Craig Chi <craigchi@xxxxxxxxxxxx> wrote:
> Hi Nick and other Cephers,
>
> Thanks for your reply.
>
>> 2) Config Errors
>> This is an easy one to assume you are safe from, but I would say most
>> outages and data loss incidents I have seen on the mailing lists have
>> been due to poor hardware choices or to configuration options such as
>> size=2, min_size=1, or enabling things like nobarrier.
>
> I am wondering about the pros and cons of the nobarrier option with Ceph.
>
> It is well known that nobarrier is dangerous when a power outage happens, but
> if we already have replicas in different racks or on different PDUs, will Ceph
> reduce the risk of data loss with this option?
>
> I have seen many performance tuning articles recommending the nobarrier option
> on xfs, but not many of them mention the trade-offs of nobarrier.
>
> Is it really unacceptable to use nobarrier in a production environment? I would
> be very grateful if you are willing to share any experiences with nobarrier
> and xfs.
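For reference, write barriers are the default on XFS, so a safe OSD mount simply leaves the option out. The fragment below is illustrative only; the device, mount point, and other options are assumptions, not a recommendation:

```
# /etc/fstab fragment (illustrative) -- no "nobarrier", so barriers stay enabled
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  rw,noatime,inode64  0 0
```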
>
> Sincerely,
> Craig Chi (Product Developer)
> Synology Inc. Taipei, Taiwan. Ext. 361
>
> On 2016-11-17 05:04, Nick Fisk <nick@xxxxxxxxxx> wrote:
>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
>> Pedro Benites
>> Sent: 16 November 2016 17:51
>> To: ceph-users@xxxxxxxxxxxxxx
>> Subject:  how possible is that ceph cluster crash
>>
>> Hi,
>>
>> I have a 50 TB Ceph cluster with 15 OSDs that has been working fine for a
>> year, and I would like to grow it and migrate all my old storage, about
>> 100 TB, to Ceph. But I have a doubt: how likely is it that the cluster
>> fails and everything goes very badly?
>
> Everything is possible. I think there are 3 main risks:
>
> 1) Hardware failure
> I would say Ceph is probably one of the safest options with regard to
> hardware failures, certainly once you start using 4TB+ disks.
>
> 2) Config Errors
> This is an easy one to assume you are safe from, but I would say most
> outages and data loss incidents I have seen on the mailing lists have
> been due to poor hardware choices or to configuration options such as
> size=2, min_size=1, or enabling things like nobarrier.
>
> 3) Ceph Bugs
> Probably the rarest, but potentially the scariest, as you have less
> control. They do happen, and they are something to be aware of.
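As a concrete illustration of the safer alternative to size=2 / min_size=1, pool defaults can be set cluster-wide in ceph.conf. This is a sketch, not a quoted configuration from the thread; the option names are Ceph's standard OSD pool defaults and the values are the commonly recommended ones:

```ini
; ceph.conf fragment (illustrative): default new pools to three replicas
[global]
osd pool default size = 3      ; keep three copies of every object
osd pool default min size = 2  ; stop serving I/O below two live copies
```

With min_size = 2 a pool refuses writes once it is down to a single copy, which is exactly the state where size=2, min_size=1 clusters tend to lose data.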
>
>> How reliable is Ceph?
>> What is the risk of losing my data? Is it necessary to back up my data?
>
> Yes, always back up your data, no matter what solution you use. Just as
> RAID != backup, Ceph != backup either.
>
>>
>> Regards.
>> Pedro.
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


