Move OSD disks between hosts


 



I'm running a Ceph cluster with 3 mon and 4 OSD nodes (32 disks total), and I'm looking at migrating the data to 2 new nodes. The migration would happen by relocating the existing disks; I'm not getting any new hard drives. The cluster is used as the backend for an OpenStack cloud, so downtime should be as short as possible, preferably no more than 24 hours over a weekend.

I'd like a second opinion on the process, since I don't have the resources to test the move scenario. I'm running Emperor (0.72.1) at the moment. All pools in the cluster have size 2. Each existing OSD node has an SSD for journals; /dev/disk/by-id paths were used.
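Before moving anything I plan to sanity-check the replication size and the journal symlinks on each node, roughly like this (the pool name "volumes" is just an example from our OpenStack setup; substitute your own pools):

    # confirm every pool really is size 2
    ceph osd dump | grep pool
    ceph osd pool get volumes size

    # confirm journals point at the SSD partitions via /dev/disk/by-id
    ls -l /var/lib/ceph/osd/ceph-*/journal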

Here's what I think would work (a rough command sketch follows the list):
1 - stop Ceph on all of the existing OSD nodes and shut down nodes 1 & 2;
2 - take drives 1-16 / SSDs 1-2 out and put them in new node #1; start it up with Ceph's upstart scripts set to manual and check/correct the journal paths;
3 - edit the CRUSH map on the monitors to reflect the new layout;
4 - start Ceph on new node #1 and on old nodes 3 & 4; wait for recovery to finish;
5 - repeat steps 1-4 for the remaining nodes/drives.
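For reference, here is roughly the set of commands I expect the above to boil down to. The host name (newnode1) and the OSD id/weight are made up, and step 3 could equally be done per OSD with create-or-move instead of hand-editing the map:

    # before stopping anything: keep the cluster from re-replicating
    # data off the OSDs while the disks are physically in transit
    ceph osd set noout

    # on old nodes 1 & 2 (upstart): stop all OSDs, then power off
    stop ceph-osd-all

    # step 3: pull the CRUSH map, edit the host buckets, push it back
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    #   ...add a host bucket for newnode1 and move osd.0-osd.15 under it...
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

    # alternative to hand-editing, per OSD (weight = drive size in TB):
    ceph osd crush create-or-move osd.0 1.82 root=default host=newnode1

    # step 4: start the relocated OSDs on newnode1 and watch recovery
    start ceph-osd id=0
    ceph -w
    ceph health detail

    # once everything is active+clean again
    ceph osd unset noout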

Any opinions? Or a better path to follow? 

Thanks!







