Re: [quincy] Migrating ceph cluster to new network, bind OSDs to multiple public_network

Can you add some more details? Did you change the mon_host entry in ceph.conf and then reboot? So the OSDs do now work correctly within the new network? OSDs only bind to one public and one cluster IP; I'm not aware of a way to have them bind to multiple public IPs the way the MONs can. You'll probably need to route the compute node traffic towards the new network. Please correct me if I misunderstood your response.
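
To see which addresses the OSDs actually registered and are listening on, something like this should do (just a sketch, run against whatever OSD/host you have):

   # addresses the cluster map currently has for each OSD
   ceph osd dump | grep '^osd\.'

   # sockets the OSD daemons are listening on (run on an OSD host)
   ss -tlnp | grep ceph-osd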

Quoting Boris Behrens <bb@xxxxxxxxx>:

The OSDs are still only bound to one IP address.
After a reboot, the OSDs switched to the new address and are now
unreachable from the compute nodes.



On Tue, 22 Aug 2023 at 09:17, Eugen Block <eblock@xxxxxx> wrote:

You'll need to update the mon_host line as well. I'm not sure if it makes
sense to have both the old and the new network in there, but I'd try it on
one host first and see if it works.
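
Something along these lines, purely as a sketch (the NEW_NETWORK addresses are placeholders for whatever your new MONs actually use):

   mon_host = [OLD_NETWORK::10], [OLD_NETWORK::11], [OLD_NETWORK::12], [NEW_NETWORK::10], [NEW_NETWORK::11], [NEW_NETWORK::12]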

Quoting Boris Behrens <bb@xxxxxxxxx>:

> We're working on the migration to cephadm, but it requires some
> prerequisites that still need planning.
>
> root@host:~# cat /etc/ceph/ceph.conf ; ceph config dump
> [global]
> fsid = ...
> mon_host = [OLD_NETWORK::10], [OLD_NETWORK::11], [OLD_NETWORK::12]
> #public_network = OLD_NETWORK::/64, NEW_NETWORK::/64
> ms_bind_ipv6 = true
> ms_bind_ipv4 = false
> auth_cluster_required = none
> auth_service_required = none
> auth_client_required = none
>
> [client....]
> ms_mon_client_mode = crc
> #debug_rgw = 20
> rgw_frontends = beast endpoint=[OLD_NETWORK::12]:7480
> rgw_region = ...
> rgw_zone = ...
> rgw_thread_pool_size = 512
> rgw_dns_name = ...
> rgw_dns_s3website_name = ...
>
> [mon....-new]
> public_addr = NEW_NETWORK::12
> public_bind_addr = NEW_NETWORK::12
>
> WHO               MASK  LEVEL     OPTION                                  VALUE                                RO
> global                  advanced  auth_client_required                    none                                 *
> global                  advanced  auth_cluster_required                   none                                 *
> global                  advanced  auth_service_required                   none                                 *
> global                  advanced  mon_allow_pool_size_one                 true
> global                  advanced  ms_bind_ipv4                            false
> global                  advanced  ms_bind_ipv6                            true
> global                  advanced  osd_pool_default_pg_autoscale_mode      warn
> global                  advanced  public_network                          OLD_NETWORK::/64, NEW_NETWORK::/64   *
> mon                     advanced  auth_allow_insecure_global_id_reclaim   false
> mon                     advanced  mon_allow_pool_delete                   false
> mgr                     advanced  mgr/balancer/active                     true
> mgr                     advanced  mgr/balancer/mode                       upmap
> mgr                     advanced  mgr/cephadm/migration_current           5                                    *
> mgr                     advanced  mgr/orchestrator/orchestrator           cephadm
> mgr.0cc47a6df14e        basic     container_image                         quay.io/ceph/ceph@sha256:09e527353463993f0441ad3e86be98076c89c34552163e558a8c2f9bfb4a35f5   *
> mgr.0cc47aad8ce8        basic     container_image                         quay.io/ceph/ceph@sha256:09e527353463993f0441ad3e86be98076c89c34552163e558a8c2f9bfb4a35f5   *
> osd.0                   basic     osd_mclock_max_capacity_iops_ssd        13295.404086
> osd.1                   basic     osd_mclock_max_capacity_iops_ssd        14952.522452
> osd.2                   basic     osd_mclock_max_capacity_iops_ssd        13584.113025
> osd.3                   basic     osd_mclock_max_capacity_iops_ssd        16421.770356
> osd.4                   basic     osd_mclock_max_capacity_iops_ssd        15209.375302
> osd.5                   basic     osd_mclock_max_capacity_iops_ssd        15333.697366
>
> On Mon, 21 Aug 2023 at 14:20, Eugen Block <eblock@xxxxxx> wrote:
>
>> Hi,
>>
>> > I don't have those configs. The cluster is not maintained via cephadm /
>> > orchestrator.
>>
>> I just assumed that with Quincy it would already be managed by cephadm.
>> So what does the ceph.conf currently look like on an OSD host (mask
>> sensitive data)?
>>
>> Quoting Boris Behrens <bb@xxxxxxxxx>:
>>
>> > Hey Eugen,
>> > I don't have those configs. The cluster is not maintained via cephadm /
>> > orchestrator.
>> > The ceph.conf does not have IP addresses configured.
>> > A grep in /var/lib/ceph shows only binary matches on the mons.
>> >
>> > I've restarted the whole host, which also did not work.
>> >
>> > On Mon, 21 Aug 2023 at 13:18, Eugen Block <eblock@xxxxxx> wrote:
>> >
>> >> Hi,
>> >>
>> >> there have been a couple of threads wrt network changes; simply
>> >> restarting OSDs is not sufficient. I still haven't had to do it
>> >> myself, but did you run 'ceph orch reconfig osd' after adding the
>> >> second public network, and then restart them? I'm not sure if the
>> >> orchestrator works as expected here; last year there was a thread [1]
>> >> with the same intention. Can you check the local ceph.conf file
>> >> (/var/lib/ceph/<FSID>/<SERVICE>/config) of the OSDs (or start with
>> >> one) to see if it contains both public networks? I (still) expect the
>> >> orchestrator to update that config as well. Maybe it's worth a bug
>> >> report? If there's more to it than just updating the monmap, I would
>> >> like to see that added to the docs, since moving monitors to a
>> >> different network is already documented [2].
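>> >>
>> >> Roughly what I mean, as an untested sketch (the osd service name and
>> >> the config path follow the usual cephadm layout):
>> >>
>> >>   # push the updated config to the OSD daemons, then restart them
>> >>   ceph orch reconfig osd
>> >>   ceph orch restart osd
>> >>
>> >>   # check whether the local config now contains both public networks
>> >>   grep public_network /var/lib/ceph/<FSID>/osd.*/config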
>> >>
>> >> Regards,
>> >> Eugen
>> >>
>> >> [1] https://www.spinics.net/lists/ceph-users/msg75162.html
>> >> [2] https://docs.ceph.com/en/quincy/cephadm/services/mon/#moving-monitors-to-a-different-network
>> >>
>> >> Quoting Boris Behrens <bb@xxxxxxxxx>:
>> >>
>> >> > Hi,
>> >> > I need to migrate a storage cluster to a new network.
>> >> >
>> >> > I added the new network to the ceph config via:
>> >> > ceph config set global public_network "old_network/64, new_network/64"
>> >> > I've added a set of new mon daemons with IP addresses in the new
>> >> > network, and they have been added to the quorum and seem to work as
>> >> > expected.
>> >> >
>> >> > But when I restart the OSD daemons, they do not bind to the new
>> >> > addresses. I would have expected the OSDs to try to bind to all
>> >> > networks, but they are only bound to the old_network.
>> >> >
>> >> > The idea was to add the new set of network config to the current
>> >> > storage hosts, bind everything to IP addresses in both networks,
>> >> > shift over the workload, and then remove the old network.
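>> >> >
>> >> > As a rough sketch of how each piece can be verified (standard ceph
>> >> > CLI, nothing cluster-specific):
>> >> >
>> >> >   ceph config get mon public_network   # should list both networks
>> >> >   ceph mon dump                        # MON addresses in the monmap
>> >> >   ceph osd dump | grep '^osd\.'        # addresses the OSDs registered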
>> >>
>> >>
>> >>
>> >
>> >
>>
>>
>>
>
>





--
The "UTF-8-Probleme" self-help group is meeting in the large hall this time,
as an exception.


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



