As per your ceph status, it seems you have 19 pools; are all of them erasure coded as 3+2? When you took the node offline, Ceph was able to move some of the PGs to other nodes, which suggests that one or more pools do not require all 5 OSDs to be healthy (maybe they are replicated, or not 3+2 erasure coded?). Those PGs are the active+clean+remapped ones: Ceph could successfully place them on other OSDs to maintain the replica count / erasure coding profile, and that remapping process has completed. The other PGs do seem to require all 5 OSDs to be present; these are the "undersized" ones.
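If you want to double check which pools are which, something along these lines should show it (the profile name below is just a placeholder, use whatever "ceph osd pool ls detail" reports for your pools):

    # list every pool with its size / erasure-code profile;
    # replicated pools show "replicated size N", EC pools show their profile name
    ceph osd pool ls detail

    # inspect the profile to confirm it really is k=3 m=2
    ceph osd erasure-code-profile get <your-ec-profile>

    # list the PGs that are currently stuck undersized
    ceph pg dump_stuck undersized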
One other thing: if your failure domain is osd rather than host (or a larger unit), Ceph will not try to place all replicas/shards on different servers, only on different OSDs, so it can satisfy the placement rule even while one of the hosts is down. That setting would be highly inadvisable on a production system!
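To see which failure domain you are actually using, something like this should do it (pool / profile / rule names are placeholders again):

    # for EC pools the profile records the failure domain it was created with
    ceph osd erasure-code-profile get <your-ec-profile>   # look for crush-failure-domain=osd or =host

    # or look at the CRUSH rule itself: the chooseleaf/choose step shows
    # whether shards are spread across "osd" or "host" buckets
    ceph osd pool get <your-pool> crush_rule
    ceph osd crush rule dump <your-rule>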
Denes.

On 11/30/2017 02:45 PM, David Turner wrote: