Re: Cephfs Kernel client not working properly without ceph cluster IP

Hi Janne,

My OSDs have both a public IP and a cluster IP configured. The monitor node
and OSD nodes are co-located.
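
A split like that is usually expressed in ceph.conf roughly as below; this is
only a sketch, and the /24 subnets are an assumption based on the 10.100.3.1 /
10.100.4.1 addresses mentioned further down the thread:

    [global]
        # client <-> mon/OSD traffic (assumed subnet)
        public_network = 10.100.3.0/24
        # OSD <-> OSD replication/heartbeat traffic only (assumed subnet)
        cluster_network = 10.100.4.0/24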

regards
Amudhan P

On Tue, Nov 10, 2020 at 4:45 PM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

>
>
> Den tis 10 nov. 2020 kl 11:13 skrev Amudhan P <amudhan83@xxxxxxxxx>:
>
>> Hi Nathan,
>>
>> The kernel client should be using only the public IP of the cluster to
>> communicate with OSDs.
>>
>
> "ip of the cluster" is a bit of a weird way to state it.
>
> A mounting client only needs to talk to IPs in the public range, yes, but
> OSDs always need to have an IP in the public range too.
> The private range is only for OSD<->OSD traffic and can live on a separate
> private network, meaning an OSD which uses both the private and public ranges
> needs two IPs, one in each range.
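>
> A quick way to see which addresses the OSDs actually register for client
> traffic is the OSD map, e.g. something like:
>
>     # each osd line should list its public (client-facing) address first,
>     # followed by its cluster address
>     ceph osd dump | grep '^osd'
>
> If an osd line shows a 10.100.4.x address in the public position, that would
> explain why a kernel client ends up trying to connect to the cluster network.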
>
>
>
>> But here it requires both IPs for the mount to work properly.
>>
>> regards
>> Amudhan
>>
>>
>>
>> On Mon, Nov 9, 2020 at 9:51 PM Nathan Fish <lordcirth@xxxxxxxxx> wrote:
>>
>> > It sounds like your client is able to reach the mon but not the OSD?
>> > It needs to be able to reach all mons and all OSDs.
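>> >
>> > As a rough reachability check from the client (10.100.3.1 is the public
>> > address mentioned below, and 6800-7300 is just the usual default OSD port
>> > range, so adjust as needed):
>> >
>> >     ping -c 3 10.100.3.1
>> >     nc -zv 10.100.3.1 6800-7300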
>> >
>> > On Sun, Nov 8, 2020 at 4:29 AM Amudhan P <amudhan83@xxxxxxxxx> wrote:
>> > >
>> > > Hi,
>> > >
>> > > I have mounted my CephFS (Ceph Octopus) through the kernel client on Debian.
>> > > I get the following error in "dmesg" when I try to read any file from my
>> > > mount:
>> > > "[  236.429897] libceph: osd1 10.100.4.1:6891 socket closed (con state CONNECTING)"
>> > >
>> > > I use a public IP (10.100.3.1) and a cluster IP (10.100.4.1) in my Ceph
>> > > cluster. I think the public IP should be enough to mount the share and
>> > > work on it, but in my case the client also needs a cluster IP assigned to
>> > > work properly.
>> > >
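>> > > For reference, the mount itself only takes the monitors' public address,
>> > > roughly like this (mount point and key are placeholders):
>> > >
>> > >     mount -t ceph 10.100.3.1:6789:/ /mnt/cephfs -o name=admin,secret=<key>
>> > >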
>> > > Does anyone have experience with this?
>> > >
>> > > I also mailed the ceph-users list earlier but didn't get any response,
>> > > so I am sending again; I am not sure my mail went through.
>> > >
>> > > regards
>> > > Amudhan
>> >
>>
>
>
> --
> May the most significant bit of your life be positive.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


