On 10/30/2017 02:33 PM, shadow_lin wrote:
Hi,

that is a trade-off between how important your data is, how quickly you think you'd notice a failure [even if Ceph somehow doesn't, or can't deal with it because of an imperfect configuration or such], and how quickly you'd ultimately be done rebuilding redundancy. I'd suggest imagining a bad scenario, estimating how long recovery would take, and considering how nervous it would make you to be down to x OSDs/hosts of redundancy for that duration.

Apart from that, Ceph isn't a super-stable, super-easy-to-comprehend type of software yet (no offense intended to anyone - overall you're doing a good job!). As far as I can guess, there could be the occasional moment when one extra copy allows you to take a simpler, more reliable "tabula rasa" approach to fixing host/OSD problems while staying safe enough.

shadow_lin wrote:
Three OSDs failing before you or Ceph restores them is a less likely event in a pool of 6 OSDs than in one of 12. If you had k=10 million, m=2 with 12 TB HDDs, it would probably take only seconds to minutes until you had data loss.
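To put rough numbers on that, here's a minimal back-of-the-envelope sketch (the annual failure rate and rebuild window are made-up assumptions, and failures are treated as independent, which real disks aren't):

    from math import comb

    def p_loss(n, m, afr=0.05, rebuild_hours=24.0):
        # Probability that more than m of n disks fail within one
        # rebuild window, assuming independent failures at a constant
        # annual failure rate (afr). A crude model, not a prediction.
        p = afr * rebuild_hours / (365 * 24)  # per-disk failure chance in the window
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(m + 1, n + 1))

    # A wider stripe spreads each PG over more disks, so the same m
    # has to cover more potential failures:
    print(p_loss(n=6, m=2))   # e.g. k=4,  m=2
    print(p_loss(n=12, m=2))  # e.g. k=10, m=2

With those (invented) numbers, the 12-disk stripe is roughly an order of magnitude more likely to lose data in a given window, and the gap only widens as k grows - which is why a k in the millions would lose data almost immediately.

shadow_lin wrote: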
I don't actually know that - but the default is not bad if you ask me (not that it features writes faster than reads). Plus it lets you pick m.
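For what it's worth, picking k and m is a direct trade between raw-space overhead and how many simultaneous failures a PG survives; a quick sketch of that arithmetic (the profiles listed are just examples, not recommendations):

    # Raw bytes stored per usable byte, and tolerated failures,
    # for a few example erasure-code profiles.
    for k, m in [(2, 1), (4, 2), (10, 2), (8, 3)]:
        print(f"k={k:2d} m={m}: survives {m} failures, "
              f"raw overhead {(k + m) / k:.2f}x")

- Michael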