I've just done a Ceph upgrade from Jewel to Luminous and am facing the same case...
# EC profile
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=3
m=2
plugin=jerasure
technique=reed_sol_van
w=8
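For reference, the profile and the CRUSH rule it generated can be dumped with something like the following (the profile, pool and rule names are only placeholders, not the real names from this cluster):

ceph osd erasure-code-profile get <ec-profile-name>
ceph osd pool get <ec-pool-name> crush_rule
ceph osd crush rule dump <crush-rule-name>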
There are 5 hosts in the cluster and I ran systemctl stop ceph.target on one of them.
Some PGs from the EC pool were remapped (active+clean+remapped state) even though there were no longer enough hosts in the cluster, while others are still in the active+undersized+degraded state.
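In case it helps, this is roughly how the two groups of PGs can be compared (the PG id is only a placeholder):

ceph pg dump_stuck undersized    # PGs that stayed active+undersized+degraded
ceph pg ls remapped              # PGs that went active+clean+remapped
ceph pg map <pgid>               # prints the up set vs. the acting set for one PG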
root@host01:~# ceph status
  cluster:
    id:     a6f73750-1972-47f6-bcf5-a99753be65ad
    health: HEALTH_WARN
            Degraded data redundancy: 876/9115 objects degraded (9.611%), 540 pgs unclean, 540 pgs degraded, 540 pgs undersized

  services:
    mon: 3 daemons, quorum host01,host02,host03
    mgr: host01(active), standbys: host02, host03
    osd: 60 osds: 48 up, 48 in; 484 remapped pgs
    rgw: 3 daemons active

  data:
    pools:   19 pools, 3736 pgs
    objects: 1965 objects, 306 MB
    usage:   5153 MB used, 174 TB / 174 TB avail
    pgs:     876/9115 objects degraded (9.611%)
             2712 active+clean
             540  active+undersized+degraded
             484  active+clean+remapped

  io:
    client: 17331 B/s rd, 20 op/s rd, 0 op/s wr

root@host01:~#
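If it's relevant, the pool parameters and the OSD/host layout can also be checked with something like this (the pool name is again just a placeholder):

ceph osd pool get <ec-pool-name> size      # should be k+m = 5
ceph osd pool get <ec-pool-name> min_size
ceph osd tree                              # shows which host's OSDs are down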
Is anyone here able to explain this behavior to me?
Jakub