Re: Replaced a disk, first time. Quick question

On Mon, Dec 4, 2017 at 4:39 PM, Drew Weaver <drew.weaver@xxxxxxxxxx> wrote:

Howdy,

 

I replaced a disk today because it was marked as Predicted failure. These were the steps I took:

 

ceph osd out 17

ceph -w  # waited for the rebalance to finish

systemctl stop ceph-osd@17

ceph osd purge 17 --yes-i-really-mean-it

umount /var/lib/ceph/osd/ceph-17

 

I noticed that after I ran the ‘osd out’ command, it started moving data around.


That's normal
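
That said, if the plan was to put a replacement disk back under the same OSD id, there is a way to avoid moving the data twice: destroy the OSD instead of purging it, so it keeps its id and CRUSH entry. A rough sketch, assuming a Luminous cluster ('ceph osd destroy' doesn't exist before that) and OSD id 17:

# optionally pause rebalancing while the disk is being swapped
ceph osd set norebalance

# keep the id and CRUSH entry, but mark the OSD as destroyed
ceph osd destroy 17 --yes-i-really-mean-it

# ...swap the disk and re-provision the OSD, reusing id 17...

# let the cluster settle again
ceph osd unset norebalance

With purge, the OSD is removed from CRUSH entirely, so the cluster rebalances once when it goes away and again when the new OSD is added.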

 

19446/16764 objects degraded (115.999%)  <-- I noticed that number seems odd


I don't think that's normal!
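
A few things that may help narrow down where that figure comes from (my understanding is that the degraded count is per object copy, not per unique object, so with replication it can in principle climb past 100%, but that's worth verifying on your cluster):

ceph health detail
ceph pg dump_stuck degraded
ceph osd df tree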

 

So then I replaced the disk,

created a new label on it, and ran:

ceph-deploy osd prepare OSD5:sdd

 

THIS time, it started rebuilding

 

40795/16764 objects degraded (243.349%)  <-- Now I’m really concerned.

 

Perhaps I don’t quite understand what the numbers are telling me, but is it normal for it to be rebuilding more objects than exist?

See http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-September/020682.html; it seems to be a similar issue to yours.

I'd recommend providing more info: Ceph version, bluestore or filestore, crush map, etc.
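
For example (ids and file names here are placeholders):

ceph versions        # per-daemon versions, Luminous+
ceph osd tree        # topology as CRUSH sees it
ceph osd metadata <osd-id> | grep osd_objectstore   # bluestore vs filestore
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt           # decompiled crush map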

 

Thanks,

-Drew

 

 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

