Re: Correct procedure for removing ceph nodes

On Wed, Jun 02, 11:30, Sage Weil wrote:
> > 3. OSD:
> >    Again, I suppose we could just kill the daemon, but that'd leave
> > holes in the data placement which doesn't seem to be very elegant.
> > Setting the device weight to 0 in the crushmap works, but trying to
> > remove a device entirely produces strange results. Could you shed some
> > light on this?
> 
> There are a few ways to go about it.  Simply marking the osd 'out' ('ceph 
> osd out #') will work, but may not be optimal depending on how the crush 
> map is set up.  The default crush maps use the 'straw' bucket type 
> everywhere, which deals with addition/removal optimally, so taking the 
> additional step of removing the item from the crush map will keep things 
> tidy and erase all trace of the osd.
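
Thanks, that helps. For the archives, the resulting sequence in one place
(only a sketch, assuming a reasonably current ceph CLI; exact command
names may differ between versions):

    ceph osd out <id>                 # stop mapping new data to the osd
    ceph osd crush remove osd.<id>    # drop it from the crush map, keeping placement tidy
    ceph auth del osd.<id>            # remove its cephx key, if authentication is in use
    ceph osd rm <id>                  # finally delete the osd entry itself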

BTW: It would be nice to be able to _replace_ an OSD without having
a time window during which there is less redundancy.

IOW, something like the "clone failing disk" feature of some hardware
RAID controllers or LVM's pvmove. The idea is to first mirror all data
onto two disks/PVs/OSDs and to kick one out only after mirroring is
complete.
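
Until something like that exists, the closest approximation I can think
of would be to bring the replacement osd up first and drain the old one
before taking it out. This is only a sketch; it assumes the old osd keeps
serving its copies while the data is migrated away, and the command
syntax may vary:

    # after the new osd is up and in the crush map:
    ceph osd crush reweight osd.<old-id> 0   # drain data off while the osd stays up and in
    ceph -w                                  # wait for recovery/backfill to finish
    ceph osd out <old-id>                    # only now take it out and remove it as above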

Andre
-- 
The only person who always got his work done by Friday was Robinson Crusoe
