Just moving the OSD is indeed the right thing to do, and the CRUSH map will update when the OSDs start up on the new host. The only gotcha is if your journals/WAL/DBs are not on the same device as your data. In that case you will need to move both devices to the new server for the OSD to start. Without the second device the OSD will simply fail to start; you can then go back and move it afterwards without any problems, it just adds a little to the time the disk is out of service.
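A quick way to check before pulling a disk is to look at the symlinks under the OSD's data directory; for a ceph-disk/ceph-deploy style deployment like yours something along these lines should tell you (the OSD id is only an example):

    ls -l /var/lib/ceph/osd/ceph-0/journal    # filestore journal, if any
    ls -l /var/lib/ceph/osd/ceph-0/block.db   # bluestore DB, if any
    ls -l /var/lib/ceph/osd/ceph-0/block.wal  # bluestore WAL, if any
    ceph-disk list

If any of those symlinks point at a partition on a different disk than the data, both disks have to travel together.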
Please note that moving the disks will change the CRUSH map: each OSD ends up under a different host bucket, so the algorithm used to place data on OSDs will recalculate where your data goes. Expect a lot of data movement after doing this even though the total number of disks stays the same.
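If you go one OSD at a time, a sequence along these lines should keep the risk low (the commands and flag are standard Ceph, the OSD id is only an example); setting noout stops the cluster from re-replicating data while the disk is in transit:

    ceph osd set noout
    systemctl stop ceph-osd@12          # on the old host
    # physically move the disk(s); udev should activate the OSD on the new host
    ceph osd unset noout

You can watch the resulting rebalance with "ceph -s" and "ceph osd df" and wait for HEALTH_OK before touching the next disk.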
On Fri, Mar 16, 2018 at 7:23 PM <ceph@xxxxxxxxxx> wrote:
Hi Jon,
On 16 March 2018 17:00:09 CET, Jon Light <jon@xxxxxxxxxxxx> wrote:
>Hi all,
>
>I have a very small cluster consisting of 1 overloaded OSD node and a
>couple MON/MGR/MDS nodes. I will be adding new OSD nodes to the cluster
>and
>need to move 36 drives from the existing node to a new one. I'm running
>Luminous 12.2.2 on Ubuntu 16.04 and everything was created with
>ceph-deploy.
>
>What is the best course of action for moving these drives? I have read
>some
>posts that suggest I can simply move the drive and once the new OSD
>node
>sees the drive it will update the cluster automatically.
I would give this a try. I tested this scenario at the beginning of my cluster (Jewel/ceph-deploy/ceph-disk) and was able to remove one OSD and put it into another node - udev did its magic.
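If udev does not pick the disk up on the new node for some reason, activating it by hand with ceph-disk should also work, e.g. (the device name is only an example):

    ceph-disk activate /dev/sdb1

or "ceph-disk activate-all" to bring up everything it finds.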
- Mehmet
>
>Time isn't a problem and I want to minimize risk so I want to move 1
>OSD at
>a time. I was planning on stopping the OSD, moving it to the new host,
>and
>waiting for the OSD to become up and in and the cluster to be healthy.
>Are
>there any other steps I need to take? Should I do anything different?
>
>Thanks in advance
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com