Re: Correct procedure for removing ceph nodes


 



On Fri, 4 Jun 2010, Andre Noll wrote:
> On Wed, Jun 02, 11:30, Sage Weil wrote:
> > > 3. OSD:
> > >    Again, I suppose we could just kill the daemon, but that'd leave
> > > holes in the data placement which doesn't seem to be very elegant.
> > > Setting the device weight to 0 in the crushmap works, but trying to
> > > remove a device entirely produces strange results. Could you shed some
> > > light on this?
> > 
> > There are a few ways to go about it.  Simply marking the osd 'out' ('ceph 
> > osd out #') will work, but may not be optimal depending on how the crush 
> > map is set up.  The default crush maps use the 'straw' bucket type 
> > everywhere, which deals with addition/removal optimally, so taking the 
> > additional step of removing the item from the crush map will keep things 
> > tidy and erase all trace of the osd.
> 
> BTW: It would be nice to be able to _replace_ an OSD without having
> a time window during which there is less redundancy.
> 
> IOW, something like the "clone failing disk" feature of some hardware
> raids or LVM's pvmove. The idea is to first mirror all data on two
> disks/PVs/OSDs and to kick out one only after mirroring is complete.

You basically get this by marking the osd 'out' but not 'down', e.g.,

 $ ceph osd out 23   # mark out osd23

The data on osd23 isn't removed until each pg is fully replicated/migrated 
to its new location.
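
For completeness, a full removal might look something like this (a rough, 
untested sketch; the exact subcommand names and the crush item name may 
vary between versions, and 23 is just an example id):

 $ ceph osd out 23               # stop placing data on osd23
 $ ceph -w                       # watch until the pgs are active+clean again
 # ... then stop the cosd daemon for osd23 however it was started ...
 $ ceph osd crush remove osd.23  # erase it from the crush map
 $ ceph osd rm 23                # finally drop it from the osd map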

sage

