Re: Moving OSDs between hosts

Hi Jon,

On 16 March 2018 at 17:00:09 CET, Jon Light <jon@xxxxxxxxxxxx> wrote:
>Hi all,
>
>I have a very small cluster consisting of 1 overloaded OSD node and a
>couple MON/MGR/MDS nodes. I will be adding new OSD nodes to the cluster
>and
>need to move 36 drives from the existing node to a new one. I'm running
>Luminous 12.2.2 on Ubuntu 16.04 and everything was created with
>ceph-deploy.
>
>What is the best course of action for moving these drives? I have read
>some
>posts that suggest I can simply move the drive and once the new OSD
>node
>sees the drive it will update the cluster automatically.

I would give this a try. I tested this scenario at the beginning of my cluster (Jewel, ceph-deploy, ceph-disk) and was able to remove one OSD and put it in another node; udev did its magic.
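For a ceph-disk based deployment like this, the udev rules key off the GPT partition type, so the moved drive should activate on its own when the new host sees it. A rough sketch of what to check, assuming /dev/sdb1 stands in for the moved OSD's data partition (the device name is a placeholder, not from the original mail):

```shell
# On the new host, after inserting the drive: udev should have
# activated the OSD already. Check whether it came back up:
ceph osd tree

# If it did not activate on its own, trigger activation by hand.
# /dev/sdb1 is a placeholder for the moved OSD's data partition.
sudo ceph-disk activate /dev/sdb1
```

The OSD keeps its old id and its data; only its host changes in the CRUSH map.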

- Mehmet

>
>Time isn't a problem and I want to minimize risk so I want to move 1
>OSD at
>a time. I was planning on stopping the OSD, moving it to the new host,
>and
>waiting for the OSD to become up and in and the cluster to be healthy.
>Are
>there any other steps I need to take? Should I do anything different?
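The one-at-a-time plan above maps to roughly this per-OSD sequence. A sketch, not a tested procedure: the OSD id 12 and the device path are placeholders, and setting `noout` is optional but keeps the cluster from rebalancing while a disk is physically in transit:

```shell
# Optional: stop CRUSH from marking OSDs out and rebalancing
# while drives are being moved between hosts.
ceph osd set noout

# On the old host: stop the OSD cleanly before pulling the drive.
# "12" is a placeholder OSD id.
sudo systemctl stop ceph-osd@12

# Physically move the drive, then on the new host udev should
# activate it; otherwise activate the data partition by hand
# (/dev/sdb1 is a placeholder).
sudo ceph-disk activate /dev/sdb1

# Wait for the OSD to be up and the cluster healthy before
# moving the next drive.
ceph osd tree
ceph -s

# Once all drives have been moved:
ceph osd unset noout
```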
>
>Thanks in advance
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
