Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

Thank you Stefan and Josh!
Tony
________________________________________
From: Josh Baergen <jbaergen@xxxxxxxxxxxxxxxx>
Sent: March 28, 2021 08:28 PM
To: Tony Liu
Cc: ceph-users@xxxxxxx
Subject: Re: Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

As was mentioned in this thread, all of the mon clients (OSDs included) learn about other mons through monmaps, which are distributed whenever mon membership or election state changes. Thus, your OSDs should already know about the new mons.
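If you want to double-check, dumping the current monmap from any node with admin access should list all three mons, e.g.:

# ceph mon dump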

mon_host indicates the list of mons that mon clients should try to contact at boot. Thus, it's important to have it correct in the config, but it doesn't need to be updated after the process starts.
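For example, a fully populated mon_host line in ceph.conf for your three mons would look something like this (addresses taken from your earlier message):

[global]
mon_host = [v2:10.250.50.80:3300/0,v1:10.250.50.80:6789/0,v2:10.250.50.81:3300/0,v1:10.250.50.81:6789/0,v2:10.250.50.82:3300/0,v1:10.250.50.82:6789/0]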

At least that's how I understand it; the config docs aren't terribly clear on this behaviour.

Josh


On Sat., Mar. 27, 2021, 2:07 p.m. Tony Liu, <tonyliu0592@xxxxxxxxxxx> wrote:
Just realized that all config files (/var/lib/ceph/<cluster id>/<service>/config)
on all nodes are already updated properly. It must be handled as part of adding
MONs. But "ceph config show" still shows only a single host:

mon_host                       [v2:10.250.50.80:3300/0,v1:10.250.50.80:6789/0]  file

That means I still need to restart all services to apply the update, right?
Is this supposed to be part of adding MONs as well, or is it an additional manual step?


Thanks!
Tony
________________________________________
From: Tony Liu <tonyliu0592@xxxxxxxxxxx>
Sent: March 27, 2021 12:53 PM
To: Stefan Kooman; ceph-users@xxxxxxx
Subject: Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

# ceph config set osd.0 mon_host [v2:10.250.50.80:3300/0,v1:10.250.50.80:6789/0,v2:10.250.50.81:3300/0,v1:10.250.50.81:6789/0,v2:10.250.50.82:3300/0,v1:10.250.50.82:6789/0]
Error EINVAL: mon_host is special and cannot be stored by the mon

It seems that the only option is to update ceph.conf and restart the services.
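With cephadm, each daemon runs as a systemd unit named after the cluster fsid, so after editing the config files the restart would presumably be something like this on each host (the fsid and daemon name below are placeholders):

# systemctl restart ceph-<fsid>@osd.0.service

or per service through the orchestrator, e.g. "ceph orch restart <service_name>".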


Tony
________________________________________
From: Tony Liu <tonyliu0592@xxxxxxxxxxx>
Sent: March 27, 2021 12:20 PM
To: Stefan Kooman; ceph-users@xxxxxxx
Subject: Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

I expanded MONs from 1 to 3 by updating the orch service with "ceph orch apply".
"mon_host" in all services (MON, MGR, OSDs) is not updated; it still shows a single
host from source "file".
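(For reference, the expansion was done with something like the following, where the hostnames are placeholders:

# ceph orch apply mon --placement="ceph-1,ceph-2,ceph-3"
)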
What's the guidance here to update "mon_host" for all services? I am talking
about Ceph services, not the client side.
Should I update ceph.conf for all services and restart all of them?
Or can I update it on the fly with "ceph config set"?
In the latter case, where is the updated configuration stored? Is it going to
be overridden by ceph.conf when the service restarts?


Thanks!
Tony

________________________________________
From: Stefan Kooman <stefan@xxxxxx>
Sent: March 26, 2021 12:22 PM
To: Tony Liu; ceph-users@xxxxxxx
Subject: Re: Do I need to update ceph.conf and restart each OSD after adding more MONs?

On 3/26/21 6:06 PM, Tony Liu wrote:
> Hi,
>
> Do I need to update ceph.conf and restart each OSD after adding more MONs?

This should not be necessary, as the OSDs should learn about these
changes through monmaps. Updating the ceph.conf after the mons have been
updated is advised.
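If you want to verify what a running OSD actually knows, you can query the daemon over its admin socket on its host (for cephadm deployments, from within the daemon's container/cephadm shell), e.g.:

# ceph daemon osd.0 config get mon_host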

> This is with 15.2.8 deployed by cephadm.
>
> When adding MONs, "mon_host" should be updated accordingly.
> Given [1], is that update "the monitor cluster’s centralized configuration
> database" or "runtime overrides set by an administrator"?

No need to put that in the centralized config database. I *think* they
mean the ceph.conf file on the clients and hosts. At least, that's what you
would normally do (if not using DNS).
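For the DNS option, Ceph can discover mons through DNS SRV records, one per mon, something like this (the domain and hostnames are placeholders):

_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon2.example.com.
_ceph-mon._tcp.example.com. 60 IN SRV 10 60 6789 mon3.example.com.

The service name defaults to "ceph-mon" and can be overridden with mon_dns_srv_name.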

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx