Re: Procedure for changing IP and domain name of all nodes of a cluster

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Wednesday, July 21st, 2021 at 9:53 AM, Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> You need to ensure that TCP traffic is routable between the networks
> for the migration. OSD-only hosts are trivial: an OSD updates its IP
> information in the OSD map on boot. This should also be the case for
> MDS, MGR, RGW, mirror agents and other services.

Unfortunately this is not possible, as the internal networks are located at two different physical locations.
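
If I understand correctly, after powering the nodes back on I could then confirm from the OSD map that every daemon registered its new address, along these lines (the subnet below is a placeholder for our new network):

    # confirm each OSD registered its new public/cluster address
    ceph osd dump | grep '^osd\.'

    # if public_network is pinned in the config DB, point it at the new subnet
    ceph config set global public_network 192.168.10.0/24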

> MONs are a different beast. You cannot change the IP address of an
> existing mon (or only with way too much effort). I would thus recommend
> removing one mon, migrating the machine to the new network, and
> re-adding the mon afterwards. Repeat with the other mons etc.

In my case I would power off all nodes at the same time, move them to the new location and power them all on again at the same time. Or would it be better to power on one mon node at a time, adapt the ceph.conf, and then go on to the next mon node?
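
From what I can see in the docs, the fallback for moving all mons at once is to rewrite the monmap offline before the first boot at the new site (mon IDs and addresses below are placeholders for ours; with cephadm each step would presumably have to run inside the mon's container, e.g. via cephadm shell):

    # grab the current monmap while the cluster still has quorum
    ceph mon getmap -o /tmp/monmap

    # swap the old addresses for the new ones
    monmaptool --rm a --rm b --rm c /tmp/monmap
    monmaptool --add a 192.168.10.1:6789 --add b 192.168.10.2:6789 \
               --add c 192.168.10.3:6789 /tmp/monmap

    # with all mon daemons stopped, inject the edited map into each mon
    ceph-mon -i a --inject-monmap /tmp/monmap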

> You will also need to adapt ceph.conf on each host (servers and clients)
> to the new IP addresses, or update the DNS configuration if you use SRV
> entries. Running instances will be updated automatically (notification
> of changes in the mon map), but newly started clients/services might
> fail if they try to use the old IP addresses. This is why ceph.conf is
> created by puppet in our setup.

Where exactly is the ceph.conf file located? I had a look on a few nodes and there is no /etc/ceph/ceph.conf file available.

Note that I am using cephadm, so maybe the ceph.conf is located somewhere else?
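
My assumption is that with cephadm the authoritative options live in the mon config database and each containerized daemon gets its own copy under the cluster fsid, so I would look along these lines (<fsid> and <host> are placeholders):

    # options stored centrally in the mon config database
    ceph config dump

    # per-daemon config file on the host running the daemon
    cat /var/lib/ceph/<fsid>/mon.<host>/config

    # generate a minimal ceph.conf for client hosts
    ceph config generate-minimal-conf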

> One client with special needs is OpenStack Cinder. The database entries
> contain the mon list for volumes, and I'm not aware of a good method to
> update them except manipulating the database. If you also use Ceph for
> Nova (root disks / ephemerals), the problem might also be present there.
> libvirt also has a list of mon hosts in the domain description for each
> instance that is using Ceph-based block storage. I'm not 100% sure
> whether this is updated, e.g. during a live migration, or what will
> happen if you shut down an instance and restart it after the migration
> (the domain XML file will not be recreated and might contain stale
> information...).
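
(Side note for anyone who does run OpenStack: the mon list embedded in a libvirt domain definition can be inspected with something like the following, where the instance name is a placeholder:

    # show the RBD disk sources, including the embedded mon host list
    virsh dumpxml instance-00000001 | grep -A 4 "protocol='rbd'"

)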

Lucky me, I am not using OpenStack but simply mounting CephFS on a few clients. I am not using any RGWs or block devices.
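
For those mounts I would simply remount against the new mon addresses once everything is up, roughly like this (addresses, user name and secret file are placeholders for ours):

    # kernel CephFS mount pointed at the new mon addresses
    mount -t ceph 192.168.10.1,192.168.10.2,192.168.10.3:/ /mnt/cephfs \
          -o name=cephfsuser,secretfile=/etc/ceph/cephfsuser.secret

and update /etc/fstab on the clients accordingly.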
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



