Re: calculating maximum number of disk and node failures that can be handled by cluster without data loss


 



Hi Kevin,

 

Ceph by default makes sure that no two copies of the data land on the same host. So with a replica count of 3, you could lose 2 hosts without losing any data or operational ability. If by some luck all disk failures were confined to 2 hosts, you could in theory survive up to 8 disk failures (4 OSDs per host × 2 hosts). Otherwise, if the disk failures are spread across the hosts, you could only withstand 2 disk failures.
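The arithmetic above can be sketched as a few lines of Python. This is just a back-of-the-envelope illustration; the function names are mine, not anything from Ceph, and it assumes the default CRUSH rule that places one replica per host:

```python
# Failure-tolerance arithmetic for a replicated Ceph pool whose CRUSH
# failure domain is the host (the default). Function names are
# illustrative, not part of any Ceph API.

def max_host_failures(replicas: int) -> int:
    # One copy per host, so losing (replicas - 1) hosts leaves one copy.
    return replicas - 1

def max_disk_failures_same_hosts(replicas: int, osds_per_host: int) -> int:
    # Best case: every failed disk is on one of (replicas - 1) hosts.
    return (replicas - 1) * osds_per_host

def max_disk_failures_spread(replicas: int) -> int:
    # Worst case: each failed disk is on a different host,
    # so each failure can take out a distinct copy.
    return replicas - 1

# Kevin's cluster: 4 hosts, 4 OSDs per host, replica count 3
print(max_host_failures(3))                 # -> 2 hosts
print(max_disk_failures_same_hosts(3, 4))   # -> 8 disks, if confined to 2 hosts
print(max_disk_failures_spread(3))          # -> 2 disks, if spread across hosts
```

Note these numbers describe data survival, not self-healing: after failures, recovery also needs enough remaining hosts and free capacity to re-replicate.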

 

Nick

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of kevin parrikar
Sent: 09 June 2015 16:54
To: ceph-users@xxxxxxxxxxxxxx
Subject: calculating maximum number of disk and node failures that can be handled by cluster without data loss

 

I have a 4-node cluster, each node with 5 disks (4 OSDs and 1 operating-system disk; the cluster also runs 3 monitor processes), with the default replica count of 3.

 

Total OSD disks: 16

Total nodes: 4

 

How can I calculate the

  • Maximum number of disk failures my cluster can handle without any impact on current data and new writes.
  • Maximum number of node failures my cluster can handle without any impact on current data and new writes.

Thanks for any help.


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
