Does Ceph really halve your storage like that? If you specify N+1, does it really store two copies, or just compute checksums across MxN stripes? I guess RAID5 + Ceph with a large array (12 disks, say) would not be too bad (2.2 TB for each 1). But it would be nicer if I had 12 storage units in a single rack on a single network and could tell Ceph to stripe across them in a RAIDZ fashion, so that I'm only losing 10% of my storage to redundancy... not 50%.

-----Original Message-----
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of John-Paul Robinson
Sent: Thursday, October 03, 2013 12:08 PM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Ceph and RAID

What is the take on such a configuration? Is it worth the effort of tracking "rebalancing" at two layers, the RAID mirror and possibly Ceph if the pool has a redundancy policy? Or is it better to just let Ceph rebalance itself when you lose a non-mirrored disk?

If following the "RAID mirror" approach, would you then skip redundancy at the Ceph layer to keep your total overhead the same? That seems risky in the event you lose the storage server holding the RAID-1'd drives: with no Ceph-level redundancy, that failure would be fatal. But if you do RAID-1 plus Ceph redundancy, doesn't that mean it takes 4 TB for each 1 real TB?

~jpr

On 10/02/2013 10:03 AM, Dimitri Maziuk wrote:
> I would consider (mdadm) raid-1, dep. on the hardware & budget,
> because this way a single disk failure will not trigger a cluster-wide
> rebalance.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
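[Editor's aside: the overhead arithmetic in the thread can be sketched numerically. This is a hypothetical back-of-envelope script, not anything from Ceph itself; the 12 x 2.2 TB raw capacity and the 10+2 erasure-coding split are illustrative assumptions. A replicated pool with `size=2` stores full copies, so usable space is raw/2; layering RAID-1 under that halves it again (the "4 TB per 1 real TB" case); an erasure-coded k+m layout, as in the "RAIDZ fashion" the poster asks for, keeps k/(k+m) of the raw space.]

```python
# Hypothetical overhead calculator (illustrative only, not Ceph code).

def usable_fraction_replicated(replicas):
    # Replicated pool: each object is stored `replicas` times in full.
    return 1.0 / replicas

def usable_fraction_ec(k, m):
    # Erasure-coded layout: k data chunks plus m coding chunks per object.
    return k / (k + m)

raw_tb = 12 * 2.2  # assumed example: 12 units of 2.2 TB raw

# size=2 replicated pool: 50% of raw is usable
repl = raw_tb * usable_fraction_replicated(2)

# RAID-1 under a size=2 pool: mirroring halves raw, replication halves
# it again, so only 25% is usable (4 TB raw per 1 TB stored)
raid1_plus_repl = (raw_tb / 2) * usable_fraction_replicated(2)

# 10+2 erasure coding across the 12 units: ~83% usable,
# i.e. roughly the "only lose ~10-17% to redundancy" scenario
ec = raw_tb * usable_fraction_ec(10, 2)

print(f"replicated size=2:    {repl:.1f} TB usable")
print(f"RAID-1 + replication: {raid1_plus_repl:.1f} TB usable")
print(f"EC 10+2:              {ec:.1f} TB usable")
```

Note that this only counts capacity; it says nothing about the rebalance-traffic trade-off Dimitri raises, which is the actual argument for the RAID-1 layer.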