active+clean+remapped is not a healthy state for a PG. If the data were actually moving to a new OSD, the PG would show backfill_wait or backfilling and would eventually return to active+clean.
I'm not certain what the active+clean+remapped state means here. Perhaps a pg query, pg dump, etc. can give more insight. In any case, this is not a healthy state, and you're still testing the removal of a node that leaves you with fewer hosts than you need to be healthy.
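If it helps, a minimal sketch of the kind of queries meant above; the PG id 7.1a is only a placeholder, substitute one of the remapped PGs from your own listing:

# list the PGs that are currently remapped, then query one of them
# (7.1a is a placeholder id, not from your cluster)
ceph pg ls remapped
ceph pg 7.1a query
# compare the "up" and "acting" sets in the query output: they show where
# CRUSH wants the PG versus where it is actually being served from
ceph pg dump pgs_brief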
On Thu, Nov 30, 2017, 5:38 AM Jakub Jaszewski <jaszewski.jakub@xxxxxxxxx> wrote:
I've just done a ceph upgrade jewel -> luminous and am facing the same case...

# EC profile
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=3
m=2
plugin=jerasure
technique=reed_sol_van
w=8

5 hosts in the cluster, and I ran systemctl stop ceph.target on one of them. Some PGs from the EC pool were remapped (active+clean+remapped state) even though there were not enough hosts in the cluster, but some are still in the active+undersized+degraded state.

root@host01:~# ceph status
  cluster:
    id:     a6f73750-1972-47f6-bcf5-a99753be65ad
    health: HEALTH_WARN
            Degraded data redundancy: 876/9115 objects degraded (9.611%), 540 pgs unclean, 540 pgs degraded, 540 pgs undersized

  services:
    mon: 3 daemons, quorum host01,host02,host03
    mgr: host01(active), standbys: host02, host03
    osd: 60 osds: 48 up, 48 in; 484 remapped pgs
    rgw: 3 daemons active

  data:
    pools:   19 pools, 3736 pgs
    objects: 1965 objects, 306 MB
    usage:   5153 MB used, 174 TB / 174 TB avail
    pgs:     876/9115 objects degraded (9.611%)
             2712 active+clean
             540  active+undersized+degraded
             484  active+clean+remapped

  io:
    client: 17331 B/s rd, 20 op/s rd, 0 op/s wr

root@host01:~#

Anyone here able to explain this behavior to me?

Jakub
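As a rough sanity check against the profile quoted above (the profile name "myprofile" below is a placeholder; list yours first):

# list the EC profiles and dump the one the pool uses
# ("myprofile" is a placeholder name)
ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get myprofile
# k=3 data chunks + m=2 coding chunks = 5 chunks per object, and with
# crush-failure-domain=host each chunk must land on a different host.
# 5 hosts minus the 1 stopped host leaves only 4 failure domains for 5
# chunks, which would explain the 540 PGs stuck in
# active+undersized+degraded.
ceph osd crush rule dump
# the EC pool's rule should show a step selecting leaves of type host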
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com