Problem changing the monitor address and public_network

Hello, following Ceph's own documentation and the article I linked to, I
tried to change the addresses of the Ceph machines and the cluster's public
network.
However, when I tried to set a host's new address (ceph orch host set-addr
opcrgfpsksa0101 10.248.35.213), the command had no effect and the managers
kept trying to connect to the old addresses.
I run my cluster as a non-root user, and I am changing the network from
10.56.12.0/22 to 10.248.35.0/24.
I am using Ceph version 17.2.6 deployed with cephadm.
There is no cluster_network configured here; all communication goes over
the public_network.
I have also included the specifications and logs that I think are useful.
Does anyone have an idea what causes this problem and how to fix it?
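For reference, this is roughly the sequence I followed, based on my reading
of the documentation (standard Ceph/cephadm CLI commands; it assumes each
host already has an address in the new subnet configured at the OS level):

```shell
# 1. Verify each host actually has an address in the new subnet
#    (run on every host; should show a 10.248.35.x address)
ip -4 addr show

# 2. Point the monitors' public network at the new subnet
ceph config set mon public_network 10.248.35.0/24

# 3. Update the address the orchestrator stores for each host
ceph orch host set-addr opcrgfpsksa0101 10.248.35.213
# (repeated for the other hosts with their new addresses)
```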


--------------
debug 2024-05-26T17:28:33.191+0000 7ffb15579700  0 log_channel(cephadm) log
[INF] : Filtered out host opcrgfpsksa0101: does not belong to mon
public_network(s):  10.248.35.0/24, host network(s):
10.56.12.0/22,172.17.0.0/16
debug 2024-05-26T17:28:33.191+0000 7ffb15579700  0 [cephadm INFO
cephadm.serve] Filtered out host opcrgfpsksa0103: does not belong to mon
public_network(s):  10.248.35.0/24, host network(s):
10.56.12.0/22,172.17.0.0/16
debug 2024-05-26T17:28:33.191+0000 7ffb15579700  0 log_channel(cephadm) log
[INF] : Filtered out host opcrgfpsksa0103: does not belong to mon
public_network(s):  10.248.35.0/24, host network(s):
10.56.12.0/22,172.17.0.0/16
debug 2024-05-26T17:28:33.192+0000 7ffb15579700  0 [cephadm INFO
cephadm.serve] Filtered out host opcmrfpsksa0101: does not belong to mon
public_network(s):  10.248.35.0/24, host network(s):
10.56.12.0/22,172.17.0.0/16
debug 2024-05-26T17:28:33.192+0000 7ffb15579700  0 log_channel(cephadm) log
[INF] : Filtered out host opcmrfpsksa0101: does not belong to mon
public_network(s):  10.248.35.0/24, host network(s):
10.56.12.0/22,172.17.0.0/16
debug 2024-05-26T17:28:33.192+0000 7ffb15579700  0 [progress WARNING root]
complete: ev da5c20ec-e9df-490f-804d-182d02f0324e does not exist
debug 2024-05-26T17:28:33.192+0000 7ffb15579700  0 [progress WARNING root]
complete: ev 82392bf5-4940-4252-9aaf-aa4758c00ead does not exist
debug 2024-05-26T17:28:33.212+0000 7ffb15579700  0 [progress WARNING root]
complete: ev 1d64774f-dc6d-46c8-8f96-21d147f4b053 does not exist
debug 2024-05-26T17:28:33.212+0000 7ffb15579700  0 [progress WARNING root]
complete: ev 45322c73-8ecb-4471-b41c-0e279805dd0b does not exist
debug 2024-05-26T17:28:35.182+0000 7ffb584e1700  0 log_channel(cluster) log
[DBG] : pgmap v232: 1185 pgs: 1185 unknown; 0 B data, 0 B used, 0 B / 0 B
avail
debug 2024-05-26T17:28:37.183+0000 7ffb584e1700  0 log_channel(cluster) log
[DBG] : pgmap v233: 1185 pgs: 1185 unknown; 0 B data, 0 B used, 0 B / 0 B
avail
debug 2024-05-26T17:28:38.609+0000 7ffb4f4df700  0 log_channel(audit) log
[DBG] : from='client.14874149 -' entity='client.admin' cmd=[{"prefix":
"orch host set-addr", "hostname": "opcrgfpsksa0101", "addr":
"10.248.35.213", "target": ["mon-mgr", ""]}]: dispatch
debug 2024-05-26T17:28:39.184+0000 7ffb584e1700  0 log_channel(cluster) log
[DBG] : pgmap v234: 1185 pgs: 1185 unknown; 0 B data, 0 B used, 0 B / 0 B
avail
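As I understand the "Filtered out host" messages above, they come down to a
subnet-membership check: none of the networks cephadm detects on the host
(10.56.12.0/22, 172.17.0.0/16) overlap the configured mon public_network
(10.248.35.0/24), so the host is dropped as a mon candidate. A minimal
sketch of that logic (illustrative only, not cephadm's actual code):

```python
import ipaddress

# Networks taken from the log lines above
mon_public = ipaddress.ip_network("10.248.35.0/24")
host_networks = ["10.56.12.0/22", "172.17.0.0/16"]

# A host stays eligible for a mon only if one of its detected
# networks overlaps the configured public_network
eligible = any(
    ipaddress.ip_network(net).overlaps(mon_public)
    for net in host_networks
)
print(eligible)  # False -> the host is filtered out
```

So as long as the hosts themselves still only have 10.56.12.x addresses,
the orchestrator will keep filtering them out regardless of set-addr.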

------------
ceph orch host ls
HOST             ADDR          LABELS          STATUS
opcmrfpsksa0101  10.56.12.216  _admin mon osd
opcpmfpsksa0101  10.56.12.204  rgw
opcpmfpsksa0103  10.56.12.205  rgw
opcpmfpsksa0105  10.56.12.206  rgw
opcrgfpsksa0101  10.56.12.213  _admin mon osd
opcrgfpsksa0103  10.56.12.214  _admin mon osd
opcsdfpsksa0101  10.56.12.207  osd
opcsdfpsksa0103  10.56.12.208  osd
opcsdfpsksa0105  10.56.12.209  osd
9 hosts in cluster
------------------------
ceph orch ls
NAME                               PORTS                 RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager                       ?:9093,9094               3/3  8m ago     11M  count:3
ceph-exporter                                                9/9  8m ago     7w   *
crash                                                        9/9  8m ago     11M  *
grafana                            ?:3000                    3/3  8m ago     11d  count:3;label:mon
ingress.rgw.k8s                    10.56.12.215:80,1967      4/4  8m ago     11d  count:2;label:rgw
mds.k8s-cephfs                                               3/3  8m ago     11M  count:3
mgr                                                          2/2  7m ago     11d  count:2;label:mon
mon                                                          3/3  8m ago     7M   count:3;label:mon
node-exporter                      ?:9100                    9/9  8m ago     7w   *
osd                                                            8  8m ago     -    <unmanaged>
osd.dashboard-admin-1695638488579                             37  8m ago     8M   *
prometheus                         ?:9095                    3/3  8m ago     7w   count:3;label:mon
rgw.k8s                            ?:8080                    3/3  8m ago     5M   count:3;label:rgw
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx