Dear Ceph users,
I have one PG in my cluster that is constantly in the active+clean+remapped
state. From what I understand, there might be a problem with the up set:
# ceph pg map 3.5e
osdmap e23638 pg 3.5e (3.5e) -> up [38,78,55,49,40,39,64,2147483647]
acting [38,78,55,49,40,39,64,68]
The last OSD of the up set is NONE, and this is the only PG in my
cluster showing this. Since the corresponding OSD in the
acting set is 68, I tried marking it out of the cluster, but the only
result I got is that the PG is now active+undersized+degraded, the up
set is still missing one OSD, and no recovery operation for it is ongoing:
# ceph pg map 3.5e
osdmap e23640 pg 3.5e (3.5e) -> up [38,78,55,49,40,39,64,2147483647]
acting [38,78,55,49,40,39,64,2147483647]
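For reference, the 2147483647 printed in the up and acting sets is CRUSH_ITEM_NONE (0x7fffffff, the maximum signed 32-bit integer, defined in src/crush/crush.h), the placeholder Ceph emits when CRUSH cannot map a slot to any OSD. A quick sketch of how to spot such slots in a pasted set:

```python
# CRUSH_ITEM_NONE: the sentinel Ceph prints when CRUSH fails to fill a
# slot in the up/acting set (0x7fffffff, i.e. the max signed 32-bit int).
CRUSH_ITEM_NONE = 0x7fffffff

# The up set from `ceph pg map 3.5e` above.
up_set = [38, 78, 55, 49, 40, 39, 64, 2147483647]

# Indices of slots CRUSH could not map to any OSD.
unmapped = [i for i, osd in enumerate(up_set) if osd == CRUSH_ITEM_NONE]
print(unmapped)  # -> [7], only the 8th and last slot is unmapped
```

So CRUSH is failing to choose an 8th OSD for this PG, which usually points at the CRUSH rule or tree rather than at any particular OSD.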
I found no clue on the web about how to solve this, so I'd appreciate some help.
Thanks,
Nicola
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx