The only way to have an osd go down, but not backfill yet, is to not have the osd be marked out. Once an osd is marked out, there is no setting that would prevent backfilling from occurring to get back to your proper number of replicas. If you set your config options such that osds didn't mark themselves out, then you would have to mark them out manually every time.
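For reference, this is roughly what that looks like in practice (the interval value and osd id below are illustrative, not a recommendation):

    # Cluster-wide flag: down OSDs will not be marked out, so
    # recovery to restore the replica count will not start:
    ceph osd set noout

    # Or disable the automatic down-to-out timeout (default 600
    # seconds) in ceph.conf under [mon]:
    mon osd down out interval = 0

    # With either in place, backfill only begins once you mark the
    # osd out yourself:
    ceph osd out 12    # "12" stands in for the actual osd id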
Even if min_size were the right setting for this (which it isn't), you don't want to run with a min_size of 1. Search for that on the mailing list. There are a few threads talking about why it is terrible and will most likely lead to data corruption.
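To be clear about what min_size actually does: it is a per-pool setting that controls how many copies must be available for I/O to be served at all, not when backfill starts. For example (the pool name here is just illustrative):

    # Show the current min_size for a pool:
    ceph osd pool get rbd min_size

    # A sane value for a 3-replica pool is 2:
    ceph osd pool set rbd min_size 2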
In truth, there is no configuration to automatically wait until 2 copies of data are down before it backfills to match your original replica count. The only option is to manually mark osds out when you have 2 osds down. This could be scripted, but I would never suggest it because A) it adds complexity that undermines how resilient ceph is about data integrity and B) it would require that you run with a min_size of 1.
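If someone were determined to script it anyway, the shape of it would be something like the sketch below (it assumes jq and reads ceph's json output; treat it as an illustration of the manual step, not a recommendation):

    # Sketch only -- I would not run this, for the reasons above.
    # Count OSDs that are down but still "in":
    down_in=$(ceph osd dump -f json | jq '[.osds[] | select(.up == 0 and .in == 1)] | length')

    # Only once two (or more) are down, mark them all out to start backfill:
    if [ "$down_in" -ge 2 ]; then
        ceph osd dump -f json | jq -r '.osds[] | select(.up == 0 and .in == 1) | .osd' |
            while read -r id; do ceph osd out "$id"; done
    fi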
_______________________________________________
Hi all,
I am quite new to Ceph storage. Currently we have a Ceph environment running, but in a few months we will be setting up a new Ceph storage environment.
I have read a lot of information on the Ceph website, but the more information the better for me. What book(s) would you suggest?
I found the following books:
Learning Ceph – Karan Singh (Jan 2015)
Ceph Cookbook – Karan Singh (Feb 2016)
Mastering Ceph – Nick Fisk (May 2017)
Another question:
Ceph is self-healing; it will distribute the replicas to the available OSDs in case of a failure of one of the OSDs. Let's say my setup is configured to have 3 replicas; this means when one of the OSDs fails, it will start healing. I want that when an OSD fails and only 2 replicas are left, it shouldn't do anything; only when the 2nd OSD also fails should it start replicating/healing. Which configuration setting do I need to use, is it the min_size option?
Thanks!
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com