Replacing a failed OSD

I am finishing testing our new CephFS cluster and wanted to document a failed-OSD replacement procedure.
I noticed that when I pulled a drive to simulate a failure and ran through the replacement steps, the OSD had to be removed from the CRUSH map before the new drive could be initialized as the same OSD number.

Is it correct that I have to remove it from the CRUSH map, and then add it back after the new OSD is initialized and mounted? Is there no way to have it reuse the same OSD number without removing it from the CRUSH map?
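For reference, the sequence I am describing is roughly the following (a sketch only, run against a test cluster; the OSD id `12` and device `/dev/sdX` are placeholders):

```shell
# On a monitor node: mark the failed OSD out so data rebalances away from it
ceph osd out 12

# On the OSD host: stop the daemon for the failed OSD
systemctl stop ceph-osd@12

# Remove the OSD from the CRUSH map -- this is the removal step in question
ceph osd crush remove osd.12

# Delete its auth key and remove the OSD from the cluster, freeing id 12
ceph auth del osd.12
ceph osd rm 12

# Initialize the replacement drive; the lowest free id (12) is reused
ceph-volume lvm create --data /dev/sdX
```

After the last step the new OSD comes up with the old id, but it re-enters the CRUSH map as a fresh item, which is what prompted my question.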

Thanks for taking the time…


- Jim

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
