Re: Howto upgrade AND change distro

I think it will depend on how you have your OSDs deployed currently.

If they are bluestore OSDs deployed via ceph-volume using LVM, then migrating them to a new host should mostly be pretty painless, assuming everything (data, WAL, and DB) lives on the OSD devices themselves.
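
A quick way to confirm how they were deployed:

ceph-volume lvm list

This prints every OSD that ceph-volume knows about, along with its backing logical volumes (block, plus db/wal if those are separate).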

The corner case would be if the WAL/DB is on a separate block device or something like that, but even then it should work as long as the ownership (ceph:ceph) on the block devices is correct.
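
If an OSD does fail to start with permission errors on a separate DB/WAL device, ownership can be fixed by hand; a sketch, with hypothetical VG/LV names (ceph-volume activate normally handles this itself):

# hypothetical names; resolve the LV symlink to the underlying dm device
chown ceph:ceph $(readlink -f /dev/ceph-db-vg/db-0)

The readlink -f matters because /dev/<vg>/<lv> is a symlink, and the dm device it points at is what the OSD actually opens.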

Copy over the ceph keys, the ceph.conf, and the keyring file for the OSDs, install the requisite ceph-osd packages, and you should be good to go.
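
A rough sketch of what to carry over (default paths; "newhost" is a placeholder, and exactly which keyrings you need depends on your setup):

scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring newhost:/etc/ceph/
scp /var/lib/ceph/bootstrap-osd/ceph.keyring newhost:/var/lib/ceph/bootstrap-osd/
# on debian/ubuntu the package install would be something like:
ssh newhost apt install -y ceph-osd

For ceph-volume LVM bluestore OSDs, the per-OSD keyrings are stored in the OSD metadata and get rematerialized at activation, so you generally don't need to copy /var/lib/ceph/osd/* by hand.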

ceph-volume lvm activate --all

That should scan the LVM devices for OSDs, create systemd units for them, and start them up.
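
Afterwards you can verify with:

systemctl list-units 'ceph-osd@*'
ceph osd stat

to confirm the units exist and the OSDs report back up.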

As was mentioned, set the noout, norebalance, and norecover OSD flags before taking the node down.
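
Setting them is just:

ceph osd set noout
ceph osd set norebalance
ceph osd set norecover
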
Then, once you activate the OSDs on the new host, make sure the crush topology looks as you expect, in case the host name changed or conflicts and the "new" host ends up in a weird crush location.
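
For example:

ceph osd tree

should show the OSDs under the expected host bucket. If the reinstalled host landed somewhere odd, it can be moved back with something like (bucket names here are placeholders for your topology):

ceph osd crush move newhost root=default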

Then unset the flags, and it shouldn't move too much data, if any.
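
Unsetting mirrors the set commands:

ceph osd unset noout
ceph osd unset norebalance
ceph osd unset norecover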

You may want to lab this process first, just to be sure, but I seem to recall it working when I lost the OS disk in an OSD node previously.

Reed

> On Aug 27, 2021, at 10:16 AM, Francois Legrand <fleg@xxxxxxxxxxxxxx> wrote:
> 
> Hello,
> 
> We are running a ceph nautilus cluster under centos 7. To upgrade to pacific we need to change to a more recent distro (probably debian or ubuntu because of the recent announcement about centos 8, but the distro doesn't matter very much).
> 
> However, I couldn't find a clear procedure to upgrade ceph AND the distro! As we have more than 100 osds and ~600TB of data, we would like to avoid wiping the disks and rebuilding/rebalancing as much as possible. It seems to be possible to reinstall a server and reuse the osds, but the exact procedure remains quite unclear to me.
> 
> What is the best way to proceed? Has anyone done this and put together a reasonably detailed doc on how to do it?
> 
> Thanks for your help!
> 
> F.
