It also depends a lot on the size of your cluster. I have a test cluster I'm standing up right now with 60 nodes - a total of 600 OSDs at 4 TB each. If I lose 4 TB, that's a very small fraction of the data, and the replicas are spread out across a lot of spindles, so re-replicating that missing 4 TB isn't much of an issue across 3 racks, each with 80 Gbit/s ToR uplinks to the spine. Each node has 20 Gbit/s to the ToR in a bond.

On the other hand, if you only have 4, 8, or 10 servers and a smaller number of OSDs, you have fewer spindles to absorb that loss, and it can be more of an issue. It just depends on the size/scale of your environment (a rough back-of-envelope sketch of the scaling is below).

We're going to 8 TB drives, and that will ultimately be spread over 100 or more physical servers with 10 OSD disks per server, across 7 to 10 racks (same network topology), so an 8 TB drive loss isn't too big of an issue. Now, that assumes replication actually works well in a cluster that size - we're still sussing out that part of the PoC engagement.

~~shane

On 7/1/15, 5:05 PM, "ceph-users on behalf of German Anders" <ceph-users-bounces@xxxxxxxxxxxxxx on behalf of ganders@xxxxxxxxxxxx> wrote:
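For a rough sense of why spindle count matters, here's a back-of-envelope sketch. None of the inputs are measured from the cluster described above - the per-spindle recovery rate, fill level, and function name are assumptions for illustration only.

# Rough estimate of how long re-replicating one failed OSD takes as the
# cluster grows. All inputs are illustrative assumptions, not measurements.

def recovery_time_hours(
    total_osds: int,
    osd_size_tb: float = 4.0,
    fill_ratio: float = 0.7,              # assumed average fill of the failed OSD
    per_osd_recovery_mb_s: float = 50.0,  # assumed throttled recovery rate per spindle
    per_node_net_gbit: float = 20.0,      # bonded NIC per node (from the post)
    osds_per_node: int = 10,
) -> float:
    """Estimate hours to re-replicate the data from one failed OSD.

    The lost PGs' replicas are spread across roughly all surviving OSDs,
    so aggregate recovery bandwidth scales with spindle count until the
    per-node network becomes the bottleneck.
    """
    data_to_move_mb = osd_size_tb * fill_ratio * 1_000_000   # TB -> MB (decimal)
    surviving_osds = total_osds - 1

    # Disk-limited aggregate recovery bandwidth.
    disk_bw_mb_s = surviving_osds * per_osd_recovery_mb_s

    # Network-limited aggregate bandwidth (very coarse: sum of node uplinks).
    nodes = total_osds / osds_per_node
    net_bw_mb_s = nodes * per_node_net_gbit * 1000 / 8       # Gbit/s -> MB/s

    aggregate_mb_s = min(disk_bw_mb_s, net_bw_mb_s)
    return data_to_move_mb / aggregate_mb_s / 3600


if __name__ == "__main__":
    for osds in (40, 100, 600):   # ~4 small servers vs. the 60-node/600-OSD case
        print(f"{osds:4d} OSDs: ~{recovery_time_hours(osds):.2f} h to recover one 4 TB OSD")

The point of the sketch is only the scaling: the same 4 TB loss recovers an order of magnitude faster when hundreds of spindles share the work than when a handful do, assuming the network and CRUSH distribution don't get in the way.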