Hi,

The biggest issue with replica size 2 is that if you find an inconsistent object you will not be able to tell which copy is the correct one. With replica size 3 you can assume that the 2 copies that are the same are correct.

Until Ceph guarantees stored data integrity (that is, until we have production-ready Bluestore), I would not go with replica size 2.

On 23.09.2016 09:02, Götz Reinicke - IT Koordinator wrote:
> Hi,
>
> On 23.09.16 at 05:55, Zhongyan Gu wrote:
>> Hi there,
>> the default rbd pool replica size is 3. However, I found that in our
>> all-SSD environment, capacity becomes a cost issue. We want to save
>> more capacity, so one option is to change the replica size from 3 to 2.
>> Can anyone share their experience of the pros vs. cons of replica size
>> 2 vs. 3?
> From my (still limited) POV, one main aspect is: how reliable is your
> hardware when you think about this? How often will a disk break, a server
> crash, a datacenter burn down, a network switch fail? And if there is a
> failure, how fast can that broken part be replaced, or how fast can your
> available hardware re-replicate the lost OSD onto the remaining system?
>
> I don't have numbers, but for our first initial cluster we also went
> with a repl size of 2, and I don't have bad feelings yet when I look at
> the server and network infrastructure we got.
>
> Others with more experience will give some other hints and maybe
> numbers. I never found any sort of calculator which can say "Oh, you got
> this hardware? Then a repl size of x/y/z is what you need."
>
> HTH a bit. Regards, Götz
>
--
Tomasz Kuzemko
tomasz.kuzemko@xxxxxxxxxxxx
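
P.S. A rough back-of-the-envelope sketch (Python) for weighing the capacity saving against the risk. Every number in it is a made-up example, not from this thread; plug in your own cluster values. It is a very crude model and it ignores the silent-inconsistency problem described above, which size 3 also helps with.

# Crude sketch: usable capacity and overlapping-failure estimate for size 2 vs 3.
# All numbers are hypothetical examples; adjust for your own cluster.

raw_tb = 100.0        # total raw SSD capacity in TB (example value)
osd_count = 24        # number of OSDs (example value)
afr = 0.03            # assumed annual failure rate per SSD
rebuild_hours = 4.0   # assumed time to re-replicate a failed OSD

for size in (2, 3):
    usable_tb = raw_tb / size
    # Chance that one of the other OSDs fails while a rebuild is in flight
    # (very rough: ignores correlated failures, PG placement, full ratios, ...)
    p_overlap = (osd_count - 1) * afr * rebuild_hours / (365 * 24)
    # size 2: one overlapping failure can already lose data;
    # size 3: a second overlap is needed on top, so square the small probability.
    p_loss = p_overlap if size == 2 else p_overlap ** 2
    print(f"size={size}: usable ~{usable_tb:.0f} TB, "
          f"rough data-loss chance per rebuild ~{p_loss:.2e}")

Even with optimistic inputs, size 3 comes out orders of magnitude safer per failed OSD, which is the trade-off you are buying with the extra capacity.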