I have a 4-node cluster, each node with 5 disks: 4 for OSDs and 1 for the operating system (the OS disks on 3 of the nodes also host a monitor process). The pools use the default replica count of 3.
Total OSD disks: 16
Total nodes: 4
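For reference, this is how I am checking the replication settings on my pools. It is just a minimal sketch wrapping the standard `ceph osd pool get` command; the pool name "rbd" is only an example, substitute your own:

import subprocess

def pool_setting(pool: str, key: str) -> str:
    """Read one pool setting via the standard `ceph osd pool get` CLI."""
    result = subprocess.run(
        ["ceph", "osd", "pool", "get", pool, key],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "size: 3"

# "rbd" is only an example pool name; substitute your own.
for key in ("size", "min_size"):
    print(pool_setting("rbd", key))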
- What is the maximum number of disk (OSD) failures my cluster can handle without any impact on existing data and new writes?
- What is the maximum number of node failures my cluster can handle without any impact on existing data and new writes?
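My own back-of-the-envelope reasoning so far, assuming the defaults of size=3 and min_size=2 and a CRUSH failure domain of host (please correct me if this is wrong):

# Rough failure-tolerance arithmetic for a replicated pool, under the
# assumption of size=3, min_size=2, and CRUSH failure domain = host.
def tolerable_host_failures(size: int, min_size: int) -> dict:
    return {
        # Writes continue while every PG keeps >= min_size replicas,
        # so (size - min_size) whole hosts can fail without blocking I/O.
        "without_write_impact": size - min_size,  # 3 - 2 = 1
        # Data survives while at least one replica of every PG remains,
        # so (size - 1) hosts can fail without losing data.
        "without_data_loss": size - 1,            # 3 - 1 = 2
    }

print(tolerable_host_failures(size=3, min_size=2))
# -> {'without_write_impact': 1, 'without_data_loss': 2}

Counting individual disks seems harder, since two failed OSDs on two different hosts could still drop some PGs below min_size, and with only 3 monitors, losing 2 of the monitor-hosting nodes would also cost quorum. Is that reasoning correct?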
Thanks for any help