Re: Cephfs Kernel client not working properly without ceph cluster IP

Hi Eugen,

The issue looks fixed now; my kernel client mount works fine without the
cluster IP.

I have re-run "ceph config set osd cluster_network 10.100.4.0/24" and
restarted all services. Earlier it had been run as "ceph config set global
cluster_network 10.100.4.0/24".
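
For anyone following the thread, the change boils down to roughly the below
(Octopus with the orchestrator; removing the old "global" setting first and
the osd.1, osd.2, osd.3 daemon names are assumptions based on my setup,
adjust as needed):

ceph config rm global cluster_network
ceph config set osd cluster_network 10.100.4.0/24

ceph orch daemon restart osd.1
ceph orch daemon restart osd.2
ceph orch daemon restart osd.3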

I have run the commands you asked for; the output below is from after
applying the changes described above.
# ceph config get mon cluster_network
output: (empty)
# ceph config get mon public_network
output: 10.100.3.0/24
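
To double-check that the OSDs actually picked up the cluster network I also
plan to look at something like the below (the grep pattern is only what I
expect to search for, and each OSD line in "ceph osd dump" should list both a
public and a cluster address):

ceph config get osd cluster_network
ceph config dump | grep network
ceph osd dump | grep '^osd'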

Still testing more to confirm the fix and experimenting with my
ceph cluster.
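
On the client side I am verifying with a plain kernel mount against the public
network only, roughly like the below (the mon address is from my setup; the
mount point, secretfile path and test file name are just examples):

mount -t ceph 10.100.3.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
dd if=/mnt/cephfs/testfile of=/dev/null bs=4M count=10
dmesg | grep libceph

Reads should now complete without the "socket closed" messages from libceph.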

regards
Amudhan P

On Wed, Nov 11, 2020 at 2:14 PM Eugen Block <eblock@xxxxxx> wrote:

> > Do you find any issue in the below commands I have used to set the cluster
> > IP in the cluster?
>
> Yes I do:
>
> > ### adding public IP for ceph cluster ###
> > ceph config set global cluster_network 10.100.4.0/24
>
> I'm still not convinced that your setup is as you want it to be.
> Can you share your actual config?
>
> ceph config get mon cluster_network
> ceph config get mon public_network
>
>
>
> Quoting Amudhan P <amudhan83@xxxxxxxxx>:
>
> > Hi Eugen,
> >
> > I have only added my public IP and the relevant hostname to the hosts file.
> >
> > Do you find any issue in the below commands I have used to set the cluster
> > IP in the cluster?
> >
> > ### adding public IP for ceph cluster ###
> > ceph config set global cluster_network 10.100.4.0/24
> >
> > ceph orch daemon reconfig mon.host1
> > ceph orch daemon reconfig mon.host2
> > ceph orch daemon reconfig mon.host3
> > ceph orch daemon reconfig osd.1
> > ceph orch daemon reconfig osd.2
> > ceph orch daemon reconfig osd.3
> >
> > restarting all daemons.
> >
> > regards
> > Amudhan
> >
> > On Tue, Nov 10, 2020 at 7:42 PM Eugen Block <eblock@xxxxxx> wrote:
> >
> >> Could it be that you have some misconfiguration in the name
> >> resolution and IP mapping? I've never heard of or experienced a
> >> client requiring a cluster address; that would make the whole concept
> >> of separate networks obsolete, which is hard to believe, to be honest.
> >> I would recommend double-checking your setup.
> >>
> >>
> >> Quoting Amudhan P <amudhan83@xxxxxxxxx>:
> >>
> >> > Hi Nathan,
> >> >
> >> > The kernel client should only be using the public IP of the cluster to
> >> > communicate with the OSDs.
> >> >
> >> > But here it requires both IPs for the mount to work properly.
> >> >
> >> > regards
> >> > Amudhan
> >> >
> >> >
> >> >
> >> > On Mon, Nov 9, 2020 at 9:51 PM Nathan Fish <lordcirth@xxxxxxxxx> wrote:
> >> >
> >> >> It sounds like your client is able to reach the mon but not the OSD?
> >> >> It needs to be able to reach all mons and all OSDs.
> >> >>
> >> >> On Sun, Nov 8, 2020 at 4:29 AM Amudhan P <amudhan83@xxxxxxxxx> wrote:
> >> >> >
> >> >> > Hi,
> >> >> >
> >> >> > I have mounted my cephfs (ceph octopus) through the kernel client on
> >> >> > Debian. I get the following error in "dmesg" when I try to read any
> >> >> > file from my mount:
> >> >> > "[  236.429897] libceph: osd1 10.100.4.1:6891 socket closed (con
> >> >> > state CONNECTING)"
> >> >> >
> >> >> > I use a public IP (10.100.3.1) and a cluster IP (10.100.4.1) in my
> >> >> > ceph cluster. I think the public IP should be enough to mount the
> >> >> > share and work on it, but in my case the client also needs a cluster
> >> >> > IP to work properly.
> >> >> >
> >> >> > Does anyone have experience with this?
> >> >> >
> >> >> > I mailed the ceph-users list earlier as well but didn't get any
> >> >> > response, so I'm sending this again; I'm not sure my mail went through.
> >> >> >
> >> >> > regards
> >> >> > Amudhan
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


