Re: Correct procedure for removing ceph nodes

On Fri, Jun 04, 09:52, Sage Weil wrote:
> > IOW, something like the "clone failing disk" feature of some hardware
> > raids or LVM's pvmove. The idea is to first mirror all data on two
> > disks/PVs/OSDs and to kick out one only after mirroring is complete.
> 
> You basically get this by marking the osd 'out' but not 'down', e.g.,
> 
>  $ ceph osd out 23   # mark out osd23
> 
> The data on osd23 isn't removed until the pg is fully replicated/migrated
> to its new location.

Wow, that's awesome!
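
So the full teardown would look something like this (a sketch only; osd23
is the id from your example, and the commands assume the standard ceph CLI
removal sequence):

 $ ceph osd out 23                # stop placing new data on osd23
 $ ceph health                    # wait for the cluster to go clean again
 $ ceph osd crush remove osd.23   # drop it from the CRUSH map
 $ ceph auth del osd.23           # delete its authentication key
 $ ceph osd rm 23                 # remove the osd id itself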

Thanks
Andre
-- 
The only person who always got his work done by Friday was Robinson Crusoe


