Re: Ceph Octopus

Hi Eugen,

The `ceph config` output shows the network address I set.

I had not restarted the containers directly; I was trying the command `ceph
orch restart osd.46`, which I think was the problem. After running `ceph
orch daemon restart osd.46`, the dashboard now shows the changes.
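
To double-check that an OSD really picked up the new cluster address, a
quick sanity check (the exact metadata field names may vary by release):

ceph osd metadata 46 | grep -E 'front_addr|back_addr'
ceph osd dump | grep '^osd.46'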

Thanks.


On Fri, Oct 23, 2020 at 6:14 PM Eugen Block <eblock@xxxxxx> wrote:

> Did you restart the OSD containers? Does ceph config show your changes?
>
> ceph config get mon cluster_network
> ceph config get mon public_network
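>
> If the change took, the first command should print the subnet you set,
> e.g. (using the example network from further down this thread):
>
> ceph config get mon cluster_network
> 192.168.1.0/24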
>
>
>
> Quoting Amudhan P <amudhan83@xxxxxxxxx>:
>
> > Hi Eugen,
> >
> > I followed the steps as specified, but the OSDs did not update their
> > cluster address.
> >
> >
> > On Tue, Oct 20, 2020 at 2:52 PM Eugen Block <eblock@xxxxxx> wrote:
> >
> >> > I wonder if this would be impactful, even if  `nodown` were set.
> >> > When a given OSD latches onto
> >> > the new replication network, I would expect it to want to use it for
> >> > heartbeats — but when
> >> > its heartbeat peers aren’t using the replication network yet, they
> >> > won’t be reachable.
> >>
> >> I also expected at least some sort of impact, so I just tested it in a
> >> virtual lab environment. But apart from the temporarily "down" OSDs
> >> during the container restarts, the cluster was always responsive
> >> (although there is no client traffic). I didn't even set "nodown". All
> >> OSDs now have a new backend address and the cluster seems to be happy.
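> >>
> >> On a cluster with real client traffic it may still be worth wrapping
> >> the restarts in the flag, a minimal sketch:
> >>
> >> ceph osd set nodown
> >> # restart the OSD containers one host at a time
> >> ceph osd unset nodown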
> >>
> >> Regards,
> >> Eugen
> >>
> >>
> >> Quoting Anthony D'Atri <anthony.datri@xxxxxxxxx>:
> >>
> >> > I wonder if this would be impactful, even if  `nodown` were set.
> >> > When a given OSD latches onto
> >> > the new replication network, I would expect it to want to use it for
> >> > heartbeats — but when
> >> > its heartbeat peers aren’t using the replication network yet, they
> >> > won’t be reachable.
> >> >
> >> > Unless something has changed since I tried this with Luminous.
> >> >
> >> >> On Oct 20, 2020, at 12:47 AM, Eugen Block <eblock@xxxxxx> wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> a quick search [1] shows this:
> >> >>
> >> >> ---snip---
> >> >> # set new config
> >> >> ceph config set global cluster_network 192.168.1.0/24
> >> >>
> >> >> # let orchestrator reconfigure the daemons
> >> >> ceph orch daemon reconfig mon.host1
> >> >> ceph orch daemon reconfig mon.host2
> >> >> ceph orch daemon reconfig mon.host3
> >> >> ceph orch daemon reconfig osd.1
> >> >> ceph orch daemon reconfig osd.2
> >> >> ceph orch daemon reconfig osd.3
> >> >> ---snip---
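> >> >>
> >> >> Note that reconfig rewrites the daemon configuration, but the OSDs
> >> >> bind their addresses at startup, so a restart is presumably still
> >> >> needed for the new network to take effect, e.g.:
> >> >>
> >> >> ceph orch daemon restart osd.1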
> >> >>
> >> >> I haven't tried it myself though.
> >> >>
> >> >> Regards,
> >> >> Eugen
> >> >>
> >> >> [1]
> >> >> https://stackoverflow.com/questions/61763230/configure-a-cluster-network-with-cephadm
> >> >>
> >> >>
> >> >> Quoting Amudhan P <amudhan83@xxxxxxxxx>:
> >> >>
> >> >>> Hi,
> >> >>>
> >> >>> I have installed a Ceph Octopus cluster using cephadm with a single
> >> >>> network. Now I want to add a second network and configure it as the
> >> >>> cluster network.
> >> >>>
> >> >>> How do I configure Ceph to use the second network as the cluster
> >> >>> network?
> >> >>>
> >> >>> Amudhan
> >> >>
> >> >>
> >>
> >>
> >>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx