Re: deploying Ceph using FQDN for MON / MDS Services


 



Hi,

This is because of DNS: the kernel client cannot resolve hostnames itself, so something in userland has to provide the IP addresses to the kernel.
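
As a quick cross-check on the client (a minimal sketch; the storage.com zone and the default _ceph-mon._tcp SRV name are assumptions based on the mon-lookup-dns docs referenced later in the thread):

```
# The kernel CephFS client (libceph) has no DNS resolver of its own.
# The mount.ceph helper from ceph-common resolves MON names / SRV records
# in userland and passes literal IP addresses down to the kernel.
command -v mount.ceph || echo "ceph-common is not installed"

# See what the client would discover via DNS SRV (names are assumptions):
dig +short _ceph-mon._tcp.storage.com SRV
```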


k
Sent from my iPhone

> On 17 Apr 2023, at 05:56, Lokendra Rathour <lokendrarathour@xxxxxxxxx> wrote:
> 
> Hi Team,
> The mount on the client side should be independent of Ceph, but in this
> case of a DNS SRV-based mount we see that the ceph-common utility is
> needed.
> What could be the reason for this? Any inputs in this direction would be
> helpful.
> 
> Best Regards,
> Lokendra
> 
> 
>> On Sun, Apr 16, 2023 at 10:11 AM Lokendra Rathour <lokendrarathour@xxxxxxxxx>
>> wrote:
>> 
>> Hi,
>> Any input will be of great help.
>> Thanks once again.
>> Lokendra
>> 
>> On Fri, 14 Apr, 2023, 3:47 pm Lokendra Rathour, <lokendrarathour@xxxxxxxxx>
>> wrote:
>> 
>>> Hi Team,
>>> there is one additional observation.
>>> Mounting as a client works fine from one of the Ceph nodes.
>>> Command: sudo mount -t ceph :/ /mnt/imgs -o
>>> name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwdfULnx6qX/VDA==
>>> 
>>> We are not passing the monitor address; instead, DNS SRV is configured
>>> as per:
>>> https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
>>> 
>>> mount works fine in this case.
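>>>
>>> For reference, a minimal sketch of the zone data that page describes
>>> (record names, TTLs, priorities/weights and the storage.com zone are
>>> illustrative assumptions, not taken from this cluster):
>>>
>>> ```
>>> ;; SRV records advertising the MONs to clients
>>> _ceph-mon._tcp.storage.com. 60 IN SRV 10 60 6789 storagenode1.storage.com.
>>> _ceph-mon._tcp.storage.com. 60 IN SRV 10 60 6789 storagenode2.storage.com.
>>> _ceph-mon._tcp.storage.com. 60 IN SRV 10 60 6789 storagenode3.storage.com.
>>> ;; AAAA records for the SRV targets
>>> storagenode1.storage.com. 60 IN AAAA abcd:abcd:abcd::21
>>> storagenode2.storage.com. 60 IN AAAA abcd:abcd:abcd::22
>>> storagenode3.storage.com. 60 IN AAAA abcd:abcd:abcd::23
>>> ```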
>>> 
>>> ####
>>> 
>>> But if we try to mount from another location, i.e. from another
>>> VM/client (a non-Ceph node),
>>> we get the error:
>>>  mount -t ceph :/ /mnt/imgs  -o
>>> name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA== -v
>>> mount: /mnt/image: mount point does not exist.
>>> 
>>> The documentation says that if we do not pass the monitor address, the
>>> client tries to discover the monitor addresses from the DNS servers, but
>>> in practice this is not happening.
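>>>
>>> A few things worth checking on the non-Ceph client before blaming SRV
>>> discovery (a hedged checklist, not a confirmed diagnosis):
>>>
>>> ```
>>> # 1. The error literally says the target directory is missing here:
>>> mkdir -p /mnt/imgs
>>>
>>> # 2. SRV discovery is done in userland by mount.ceph (ceph-common),
>>> #    not by the kernel, so the helper must exist on this client too:
>>> command -v mount.ceph || echo "install ceph-common first"
>>>
>>> # 3. This client's resolver must be able to see the SRV records
>>> #    (the zone name is an assumption):
>>> dig +short _ceph-mon._tcp.storage.com SRV
>>> ```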
>>> 
>>> 
>>> 
>>> On Tue, Apr 11, 2023 at 6:48 PM Lokendra Rathour <
>>> lokendrarathour@xxxxxxxxx> wrote:
>>> 
>>>> Ceph version Quincy.
>>>> 
>>>> But now I am able to resolve the issue.
>>>> 
>>>> During the mount I do not pass any monitor details; they are
>>>> auto-discovered via SRV.
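>>>>
>>>> For anyone following along, a minimal sketch of what the client-side
>>>> /etc/ceph/ceph.conf can look like when relying on SRV discovery (per the
>>>> mon-lookup-dns docs; the mon_dns_srv_name line only restates the default,
>>>> and the IPv6 bind options mirror this cluster's setup):
>>>>
>>>> ```
>>>> [global]
>>>> # no "mon host" entry: MONs are discovered via DNS SRV records instead
>>>> # (mon_dns_srv_name below only restates the default value)
>>>> mon_dns_srv_name = ceph-mon
>>>> # this cluster is IPv6-only
>>>> ms_bind_ipv6 = true
>>>> ms_bind_ipv4 = false
>>>> ```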
>>>> 
>>>> On Tue, Apr 11, 2023 at 6:09 PM Eugen Block <eblock@xxxxxx> wrote:
>>>> 
>>>>> What ceph version is this? Could it be this bug [1]? Although the
>>>>> error message is different, not sure if it could be the same issue,
>>>>> and I don't have anything to test ipv6 with.
>>>>> 
>>>>> [1] https://tracker.ceph.com/issues/47300
>>>>> 
>>>>> Zitat von Lokendra Rathour <lokendrarathour@xxxxxxxxx>:
>>>>> 
>>>>>> Hi All,
>>>>>> Requesting any inputs around the issue raised.
>>>>>> 
>>>>>> Best Regards,
>>>>>> Lokendra
>>>>>> 
>>>>>> On Tue, 24 Jan, 2023, 7:32 pm Lokendra Rathour, <
>>>>> lokendrarathour@xxxxxxxxx>
>>>>>> wrote:
>>>>>> 
>>>>>>> Hi Team,
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> We have a ceph cluster with 3 storage nodes:
>>>>>>> 
>>>>>>> 1. storagenode1 - abcd:abcd:abcd::21
>>>>>>> 
>>>>>>> 2. storagenode2 - abcd:abcd:abcd::22
>>>>>>> 
>>>>>>> 3. storagenode3 - abcd:abcd:abcd::23
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> The requirement is to mount Ceph using the domain name of a MON node.
>>>>>>>
>>>>>>> Note: we resolve the domain name via a DNS server.
>>>>>>> 
>>>>>>> 
>>>>>>> For this we are using the command:
>>>>>>> 
>>>>>>> ```
>>>>>>> 
>>>>>>> mount -t ceph [storagenode.storage.com]:6789:/  /backup -o
>>>>>>> name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
>>>>>>> 
>>>>>>> ```
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> We are getting the following logs in /var/log/messages:
>>>>>>> 
>>>>>>> ```
>>>>>>> 
>>>>>>> Jan 24 17:23:17 localhost kernel: libceph: resolve 'storagenode.storage.com' (ret=-3): failed
>>>>>>>
>>>>>>> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip 'storagenode.storage.com:6789'
>>>>>>> 
>>>>>>> ```
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> We also tried mounting the Ceph storage using the IP of the MON, which
>>>>>>> works fine.
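>>>>>>>
>>>>>>> Since the kernel client cannot resolve names itself, one possible
>>>>>>> workaround is to resolve the FQDN in userspace and pass the literal
>>>>>>> IPv6 address to mount (a sketch only; it assumes getent can resolve
>>>>>>> storagenode.storage.com on this client):
>>>>>>>
>>>>>>> ```
>>>>>>> # resolve the MON FQDN in userland, then hand the kernel a literal IP
>>>>>>> MON_IP=$(getent ahostsv6 storagenode.storage.com | awk 'NR==1 {print $1}')
>>>>>>> mount -t ceph "[${MON_IP}]:6789:/" /backup -o \
>>>>>>>   name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
>>>>>>> ```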
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Query:
>>>>>>> 
>>>>>>> 
>>>>>>> Could you please help us with how we can mount Ceph using an FQDN?
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> My /etc/ceph/ceph.conf is as follows:
>>>>>>> 
>>>>>>> [global]
>>>>>>> 
>>>>>>> ms bind ipv6 = true
>>>>>>> 
>>>>>>> ms bind ipv4 = false
>>>>>>> 
>>>>>>> mon initial members = storagenode1,storagenode2,storagenode3
>>>>>>> 
>>>>>>> osd pool default crush rule = -1
>>>>>>> 
>>>>>>> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
>>>>>>> 
>>>>>>> mon host = [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
>>>>>>> 
>>>>>>> public network = abcd:abcd:abcd::/64
>>>>>>> 
>>>>>>> cluster network = eff0:eff0:eff0::/64
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> [osd]
>>>>>>> 
>>>>>>> osd memory target = 4294967296
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> [client.rgw.storagenode1.rgw0]
>>>>>>> 
>>>>>>> host = storagenode1
>>>>>>> 
>>>>>>> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
>>>>>>> 
>>>>>>> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
>>>>>>> 
>>>>>>> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
>>>>>>> 
>>>>>>> rgw thread pool size = 512
>>>>>>> 
>>>>>>> --
>>>>>>> ~ Lokendra
>>>>>>> skype: lokendrarathour
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>> 
>>>> 
>>>> --
>>>> ~ Lokendra
>>>> skype: lokendrarathour
>>>> 
>>>> 
>>>> 
>>> 
>>> --
>>> ~ Lokendra
>>> skype: lokendrarathour
>>> 
>>> 
>>> 
> 
> -- 
> ~ Lokendra
> skype: lokendrarathour
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



