I’ve seen that before (over 100%) but I forget the cause. At any rate, the way I replace disks is to first set the OSD weight to 0, wait for the data to rebalance, and only then mark the OSD down / out. I don’t think Ceph does any reads from a disk once you’ve marked it out, so rebalancing first means there should still be other copies to read from.
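For concreteness, a rough sketch of that sequence, assuming the failing disk is osd.17, that "weight" means the CRUSH weight (ceph osd reweight is the other knob), and a Luminous-or-newer cluster so that ceph osd purge exists:

ceph osd crush reweight osd.17 0          # drain it while the disk can still serve reads
ceph -s                                   # repeat until all PGs are active+clean again
ceph osd out osd.17                       # now mark it out
systemctl stop ceph-osd@17                # stop the daemon; it will be reported down
ceph osd purge 17 --yes-i-really-mean-it  # removes it from the CRUSH map, auth, and OSD map
umount /var/lib/ceph/osd/ceph-17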
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Drew Weaver
Howdy,

I replaced a disk today because it was marked as Predicted failure. These were the steps I took:

ceph osd out osd17
ceph -w   # waited for it to get done
systemctl stop ceph-osd@osd17
ceph osd purge osd17 --yes-i-really-mean-it
umount /var/lib/ceph/osd/ceph-osdX

I noticed that after I ran the ‘osd out’ command it started moving data around:

19446/16764 objects degraded (115.999%)   <-- I noticed that number seems odd

So then I replaced the disk, created a new label on it, and ran:

ceph-deploy osd prepare OSD5:sdd

THIS time it started rebuilding:

40795/16764 objects degraded (243.349%)   <-- Now I’m really concerned.

Perhaps I don’t quite understand what the numbers are telling me, but is it normal for it to be rebuilding more objects than exist?

Thanks,
-Drew
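A back-of-the-envelope check of those ratios, assuming the numerator counts degraded object copies while the denominator counts objects, and that the pool is 3x replicated (neither is stated in this thread):

echo "scale=5; 19446 * 100 / 16764" | bc         # 115.99856 -> reported as 115.999%
echo "scale=5; 40795 * 100 / 16764" | bc         # 243.34884 -> reported as 243.349%
echo "scale=5; 40795 * 100 / (3 * 16764)" | bc   # 81.11628 -> stays under 100% against 3 x 16764 total copies

Under that reading, having more degraded copies than objects is arithmetically possible; whether that is what Ceph is actually reporting here is a separate question.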