Re: Replacing a failed OSD

Hi Jim,

This is pretty fresh in my mind so hopefully I can help you out here.

Firstly, Ceph will backfill any gaps in the OSD numbering. So assuming only one OSD has been removed from the CRUSH map, the replacement will be assigned the same OSD number.

My steps for removing an OSD, run from the host node (where i is the OSD number):

> ceph osd down osd.i
> ceph osd out osd.i
> stop ceph-osd id=i
> umount /var/lib/ceph/osd/ceph-i
> ceph osd crush remove osd.i
> ceph auth del osd.i
> ceph osd rm osd.i
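The sequence above can be sketched as a small POSIX shell helper. `remove_osd` and the `DRY_RUN` toggle are my own names, not anything shipped with Ceph; with `DRY_RUN=1` it just prints the commands, so you can preview the sequence without touching the cluster:

```shell
# Run each command, or just echo it when DRY_RUN is set.
run() {
  if [ -n "$DRY_RUN" ]; then
    echo "$@"
  else
    "$@"
  fi
}

# Remove one OSD from the cluster and CRUSH map, in the order above.
# remove_osd is a hypothetical helper name; pass the numeric OSD id.
remove_osd() {
  id="$1"
  run ceph osd down "osd.$id"
  run ceph osd out "osd.$id"
  run stop ceph-osd id="$id"
  run umount "/var/lib/ceph/osd/ceph-$id"
  run ceph osd crush remove "osd.$id"
  run ceph auth del "osd.$id"
  run ceph osd rm "osd.$id"
}
```

For example, `DRY_RUN=1 remove_osd 3` prints the seven commands for osd.3 so you can sanity-check them before running for real.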


From here, the disk has been removed from the Ceph cluster and CRUSH map, and is ready to be pulled and replaced.

From there, I deploy the new OSD with ceph-deploy from my admin node:

> ceph-deploy disk list nodei
> ceph-deploy disk zap nodei:sdX
> ceph-deploy --overwrite-conf osd prepare nodei:sdX


This will prepare the disk and insert it back into the CRUSH map, bringing it back up and in. The OSD number should remain the same, since it fills the gap left by the previous removal.
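To confirm the replacement really came back up, you can check the `ceph osd tree` output. `osd_is_up` below is a hypothetical helper of my own; it reads the tree output from stdin, so it can be exercised against canned text as well as a live cluster:

```shell
# Succeed (exit 0) if the given OSD id appears as "up" in `ceph osd tree`
# output read from stdin. Note: the osd.N pattern is a loose regex match,
# good enough for a quick check, not for scripting against large clusters.
osd_is_up() {
  id="$1"
  awk -v osd="osd.$id" '$0 ~ osd && /up/ { found = 1 } END { exit !found }'
}
```

Usage on a live cluster would be something like `ceph osd tree | osd_is_up 3 && echo "osd.3 is back up"`.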

Hopefully this helps,

Reed

> On Sep 14, 2016, at 11:00 AM, Jim Kilborn <jim@xxxxxxxxxxxx> wrote:
> 
> I am finishing testing our new cephfs cluster and wanted to document a failed osd procedure.
> I noticed that when I pulled a drive, to simulate a failure, and ran through the replacement steps, the osd has to be removed from the crushmap in order to initialize the new drive as the same osd number.
> 
> Is this correct that I have to remove it from the crushmap, then after the osd is initialized, and mounted, add it back to the crush map? Is there no way to have it reuse the same osd # without removing it from the crush map?
> 
> Thanks for taking the time….
> 
> 
> -          Jim
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




