Hi Kevin,

Ceph by default will make sure that no two copies of the same data are placed on the same host. So with a replica count of 3, you could lose 2 hosts without losing any data or operational ability. If by some luck all disk failures were confined to 2 hosts, you could in theory survive up to 8 disk failures (2 hosts x 4 OSDs each). Otherwise, if the disk failures are spread amongst the hosts, you could only withstand 2 disk failures.

Nick

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of kevin parrikar

I have a 4-node cluster, each node with 5 disks (4 OSDs and 1 operating-system disk; also hosting 3 monitor processes), with the default replica count of 3.

Total OSD disks: 16
Total nodes: 4

How can I calculate the
Thanks for any help
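The arithmetic behind the answer above can be sketched in a few lines. This is only an illustration of the reasoning (replicas minus one host failures; best case confined to those hosts, worst case spread across hosts), not a query of a real cluster; the function name and parameters are made up for this example, and Ceph's actual behavior depends on the CRUSH map and pool settings.

```python
# Sketch of the failure-tolerance arithmetic for a replicated pool,
# assuming the default CRUSH rule that places each replica on a
# distinct host. Illustrative only; not a real Ceph API.

def failure_tolerance(hosts, osds_per_host, replicas):
    # With one replica per host, replicas - 1 whole hosts can fail
    # while at least one copy of every object survives.
    host_failures = replicas - 1
    # Worst case: each failed disk sits on a different host, so each
    # failure can remove a distinct replica of some object.
    worst_case_disks = replicas - 1
    # Best case: every failed disk is confined to the tolerated hosts.
    best_case_disks = host_failures * osds_per_host
    return host_failures, worst_case_disks, best_case_disks

# Kevin's cluster: 4 nodes, 4 OSDs each, replica count 3.
print(failure_tolerance(hosts=4, osds_per_host=4, replicas=3))
# → (2, 2, 8)
```

This matches the figures above: 2 host failures tolerated, 8 disk failures in the lucky case where they all land on those 2 hosts, and only 2 when failures are spread around.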
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com