Hi,
we really don't know anything about your cluster (Ceph version, health
status, osd tree, crush rules), so at this point one can only guess
what *could* have happened.
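To give the list something to work with, you could paste the output of
the following (assuming a reasonably recent release; adjust if some of
these differ on your version):

  ceph versions
  ceph health detail
  ceph osd tree
  ceph osd pool ls detail
  ceph osd crush rule dump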
Degraded and misplaced PGs aren't that bad, as long as there's actually
recovery going on (you left that part out of your status output).
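You can check whether anything is moving at all, e.g.:

  ceph -s        # look for a recovery line in the io section
  ceph pg stat   # one-line PG summary incl. degraded/misplaced counts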
Assuming you're using mclock, the current recommendation is to switch
back to wpq and use the "legacy" recovery settings to control backfill
behavior.
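Roughly like this (just a sketch; osd_op_queue only takes effect after
an OSD restart, and the backfill/recovery values below are examples,
not recommendations for your hardware):

  ceph config set osd osd_op_queue wpq
  # restart all OSDs, then tune e.g.:
  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_max_active 4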
Another possibility is that after removing the node CRUSH can't find
enough hosts/OSDs to satisfy the rule for your pool(s). But to answer
that we would need more information (as stated above).
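If you want to rule that out yourself, compare pool size and rule
failure domain against what's left in the tree, e.g. (<pool> is a
placeholder):

  ceph osd pool get <pool> size
  ceph osd pool get <pool> crush_rule
  ceph osd crush rule dump          # check the failure domain (host?)
  ceph pg dump_stuck undersized     # PGs that can't place all replicas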
Regards,
Eugen
Quoting Devender Singh <devender@xxxxxxxxxx>:
> Hello all,
> Urgent help needed. No recovery happening.
> Tried repairing the PG and redeploying/recreating it.
> Rebooted the cluster but no luck.
>
>   data:
>     volumes: 2/2 healthy
>     pools:   18 pools, 817 pgs
>     objects: 6.06M objects, 20 TiB
>     usage:   30 TiB used, 302 TiB / 332 TiB avail
>     pgs:     2846742/29091911 objects degraded (9.785%)
>              2654404/29091911 objects misplaced (9.124%)
>              516 active+clean
>              105 active+undersized+degraded
>
> Regards
> Dev
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx