Move OSD disks between hosts

Hello Sage,

Yes, the original deployment was done via ceph-deploy - and I am very happy to read this :)

Thank you!
Dinu


On May 14, 2014, at 4:17 PM, Sage Weil <sage at inktank.com> wrote:

> Hi Dinu,
> 
> On Wed, 14 May 2014, Dinu Vlad wrote:
>> 
>> I'm running a ceph cluster with 3 mon and 4 OSD nodes (32 disks total), and I've been looking at the possibility of "migrating" the data to 2 new nodes. The operation would happen by relocating the disks - I'm not getting any new hard drives. The cluster is used as a backend for an OpenStack cloud, so downtime should be as short as possible - preferably no more than 24 hours over a weekend.
>> 
>> I'd like a second opinion on the process, since I do not have the resources to test the move scenario. I'm running emperor (0.72.1) at the moment. All pools in the cluster have size 2. Each existing OSD node has an SSD for journals; /dev/disk/by-id paths were used.
>> 
>> Here's what I think would work:
>> 1 - stop ceph on all the existing OSD nodes and shut down nodes 1 & 2;
>> 2 - take drives 1-16 / SSDs 1-2 out and put them in new node #1; start it up with ceph's upstart scripts set to manual and check/correct the journal paths;
>> 3 - edit the CRUSH map on the monitors to reflect the new situation;
>> 4 - start ceph on new node #1 and old nodes 3 & 4; wait for recovery to finish;
>> 5 - repeat steps 1-4 for the rest of the nodes/drives (a rough command sketch follows below).
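
[For illustration only, steps 1 and 2 above might look roughly like the commands below. The OSD paths and upstart job names are generic examples rather than details taken from this cluster, and the 'noout' flag is an extra precaution not mentioned in the plan (it keeps the cluster from rebalancing while the disks are in transit):

    # keep the remaining OSDs from marking the moved ones out and rebalancing
    ceph osd set noout

    # on old nodes 1 & 2: stop the OSD daemons, then power the nodes off
    sudo stop ceph-osd-all        # upstart job name on Ubuntu installs
    sudo shutdown -h now

    # on new node #1, once the drives and SSDs are in place: verify that each
    # OSD's journal symlink still resolves to the right SSD partition
    ls -l /var/lib/ceph/osd/ceph-*/journal

    # once recovery has finished at the end of the whole procedure
    ceph osd unset noout
]
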
> 
> If you used ceph-deploy and/or ceph-disk to set up these OSDs (that is, if 
> they are stored on labeled GPT partitions such that upstart is 
> automagically starting up the ceph-osd daemons for you without you putting 
> anything in /etc/fstab to manually mount the volumes) then all of this 
> should be plug and play for you--including step #3.  By default, the 
> startup process will 'fix' the CRUSH hierarchy position based on the 
> hostname and (if present) other positional data configured for 'crush 
> location' in ceph.conf.  The only real requirement is that both the osd 
> data and journal volumes get moved so that the daemon has everything it 
> needs to start up.
> 
> sage
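
[As a small illustration of the 'crush location' mechanism Sage refers to, a ceph.conf fragment and a check after startup might look like this; the rack name is invented, and the exact option names should be verified against the documentation for the version in use:

    # ceph.conf on the new OSD node
    [osd]
        # default is already true: on start, each OSD places itself in the
        # CRUSH map under the host it is running on
        osd crush update on start = true
        # optional extra positional data, per the note above (example value)
        crush location = rack=rack1 root=default

    # after starting the OSDs on the new node, confirm they now sit under it
    ceph osd tree

If an OSD does not show up under the new hostname in 'ceph osd tree', it can also be placed by hand with 'ceph osd crush create-or-move', which is roughly what the startup hook does on the daemon's behalf.]
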
> 
> 
>> 
>> Any opinions? Or a better path to follow? 
>> 
>> Thanks!
>> 


