On Tue, Mar 21, 2017 at 8:57 AM, Karol Babioch <karol@xxxxxxxxxx> wrote:
Hi,
On 20.03.2017 at 05:34, Christian Balzer wrote:
> you do realize that you very much have a corner case setup there, right?
Yes, I know that this is not exactly a recommended setup, but I hoped it
would be good enough for a start :-).
> That being said, if you'd search the archives, a similar question was
> raised by me a long time ago.
Do you have some sort of reference for this? It sounds interesting, but I
couldn't find the particular thread, and you have posted quite a lot on
this list already :-).
> The new CRUSH map of course results in different computations of where PGs
> should live, so they get copied to their new primary OSDs.
> This is the I/O you're seeing and that's why it stops eventually.
Hm, ok, that might be an explanation. I hadn't considered the fact that
the OSD gets removed from the CRUSH map and a new location is calculated.
Is there a way to prevent this in my case?
If an OSD doesn't respond, it is first marked down, and after some time (default 300 seconds) it is also marked out.
Data only starts to move once the OSD is marked out (i.e. it is no longer considered part of the CRUSH map), which is what you are observing.
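As a side note (not something discussed earlier in this thread, just the standard Ceph mechanism for planned maintenance): you can also set the cluster-wide noout flag before taking a host down, so that down OSDs are never marked out and no data movement is triggered. Roughly:

    ceph osd set noout      # before maintenance: down OSDs stay "in", no rebalancing
    # ... shut down the host, do the work, bring the OSDs back up ...
    ceph osd unset noout    # afterwards, so normal out-marking resumes

Just remember to unset it again, otherwise genuinely failed OSDs will never be marked out either.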
The settings you are probably interested in (documented here: http://docs.ceph.com/docs/jewel/rados/configuration/mon-osd-interaction/) are:
1. mon osd down out interval - defaults to 300 seconds, after which a down OSD will be marked out.
2. mon osd down out subtree limit - prevents down OSDs from being marked out automatically when an entire subtree disappears. This defaults to rack; if you change it to host, turning off an entire host should keep all of its OSDs from being marked out automatically (see the config sketch below).
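To make that concrete, here is a minimal ceph.conf sketch of the two settings above (the interval value is just the default, the subtree limit is changed from the default rack to host as suggested; adjust to your own setup):

    [mon]
    mon osd down out interval = 300       # seconds a down OSD stays "in" before being marked out
    mon osd down out subtree limit = host # don't auto-mark OSDs out when a whole host goes down

After changing these, the monitors need to pick up the new values (e.g. via a restart) for them to take effect.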
Thank you very much for your insights!
Best regards,
Karol Babioch
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com