Re: deploying Ceph using FQDN for MON / MDS Services

Hi Robert and Team,

Thank you for the help. We had previously referred to
https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
but were not able to configure mon_dns_srv_name correctly.

We then found the following link, which gives a little more information
about the DNS lookup:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/configuration_guide/ceph-monitor-configuration

After following it, we updated ceph.conf as follows:
```
[root@storagenode3 ~]# cat /etc/ceph/ceph.conf
[global]
ms bind ipv6 = true
ms bind ipv4 = false
mon initial members = storagenode1,storagenode2,storagenode3
osd pool default crush rule = -1
mon dns srv name = ceph-mon
fsid = ce479912-a277-45b6-87b1-203d3e43d776
public network = abcd:abcd:abcd::/64
cluster network = eff0:eff0:eff0::/64

[osd]
osd memory target = 4294967296

[client.rgw.storagenode3.rgw0]
host = storagenode3
keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode3.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-storagenode3.rgw0.log
rgw frontends = beast endpoint=[abcd:abcd:abcd::23]:8080
rgw thread pool size = 512

[root@storagenode3 ~]#
```
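
To confirm the option is actually being picked up from this file, the config parser can be queried directly (a minimal sketch; ceph-conf ships with the ceph-common package):

```
# Ask the Ceph config parser what it reads for mon_dns_srv_name
ceph-conf -c /etc/ceph/ceph.conf --lookup mon_dns_srv_name
# Expected output:
#   ceph-mon
```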

We also updated the DNS server as follows:

```
storagenode1.storage.com  IN  AAAA  abcd:abcd:abcd::21
storagenode2.storage.com  IN  AAAA  abcd:abcd:abcd::22
storagenode3.storage.com  IN  AAAA  abcd:abcd:abcd::23

_ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode1.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode2.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 6789 storagenode3.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode1.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode2.storage.com
_ceph-mon._tcp.storage.com 60 IN SRV 10 60 3300 storagenode3.storage.com
```
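
Two things seem worth verifying on the DNS side (a rough check, assuming dig from bind-utils is available). Note also that in a BIND-style zone file a target without a trailing dot gets the zone origin appended, so "storagenode1.storage.com" may actually expand to "storagenode1.storage.com.storage.com.":

```
# Verify the SRV records are answered as expected
dig +short _ceph-mon._tcp.storage.com SRV
# Expected, one line per monitor/port, e.g.:
#   10 60 6789 storagenode1.storage.com.

# Verify the AAAA record behind one of the SRV targets
dig +short storagenode1.storage.com AAAA
# Expected: abcd:abcd:abcd::21
```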

But when we run the command ceph -s, we get the following error:

```
[root@storagenode3 ~]# ceph -s
unable to get monitor info from DNS SRV with service name: ceph-mon
2023-02-02T15:18:14.098+0530 7f1313a58700 -1 failed for service _ceph-mon._tcp
2023-02-02T15:18:14.098+0530 7f1313a58700 -1 monclient: get_monmap_and_config cannot identify monitors to contact
[errno 2] RADOS object not found (error connecting to the cluster)
[root@storagenode3 ~]#
```
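
If we read the log correctly, the client queries the bare name _ceph-mon._tcp and relies on the resolver search list to append the domain, so /etc/resolv.conf on the node seems relevant here (a sketch under that assumption):

```
# The monclient looks up "_ceph-mon._tcp" and expects the resolver
# search list to expand it to "_ceph-mon._tcp.storage.com".
cat /etc/resolv.conf
# It should contain something like:
#   search storage.com
#   nameserver <DNS server address>

# Repeat the lookup the way the client library would (search list applied):
dig +search _ceph-mon._tcp SRV
```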

Could you please help us configure mon_dns_srv_name correctly?

On Wed, Jan 25, 2023 at 9:06 PM John Mulligan <phlogistonjohn@xxxxxxxxxxxxx>
wrote:

> On Tuesday, January 24, 2023 9:02:41 AM EST Lokendra Rathour wrote:
> > Hi Team,
> >
> > We have a ceph cluster with 3 storage nodes:
> >
> > 1. storagenode1 - abcd:abcd:abcd::21
> > 2. storagenode2 - abcd:abcd:abcd::22
> > 3. storagenode3 - abcd:abcd:abcd::23
> >
> > The requirement is to mount ceph using the domain name of the MON node.
> > Note: we resolved the domain names via a DNS server.
> >
> > For this we are using the command:
> > ```
> > mount -t ceph [storagenode.storage.com]:6789:/ /backup -o name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
> > ```
> >
> > We are getting the following logs in /var/log/messages:
> >
> > ```
> > Jan 24 17:23:17 localhost kernel: libceph: resolve 'storagenode.storage.com' (ret=-3): failed
> > Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip 'storagenode.storage.com:6789'
> > ```
>
> I saw a similar log message recently when I had forgotten to install the
> ceph mount helper.
> Check whether you have a 'mount.ceph' binary on the system. If you don't,
> try installing it from packages. On Fedora I needed to install 'ceph-common'.
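>
> For example (a rough sketch; paths and package names vary by distro):
>
> ```
> # Is the kernel mount helper present on the node?
> ls -l /usr/sbin/mount.ceph
> # If it is missing, install it (Fedora / RHEL family):
> dnf install ceph-common
> ```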
>
> > We also tried mounting the ceph storage using the IP of the MON, which
> > works fine.
> >
> > Query: could you please help us out with how we can mount ceph using the
> > FQDN?
> >
> > My /etc/ceph/ceph.conf is as follows:
> >
> > ```
> > [global]
> > ms bind ipv6 = true
> > ms bind ipv4 = false
> > mon initial members = storagenode1,storagenode2,storagenode3
> > osd pool default crush rule = -1
> > fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
> > mon host = [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
> > public network = abcd:abcd:abcd::/64
> > cluster network = eff0:eff0:eff0::/64
> >
> > [osd]
> > osd memory target = 4294967296
> >
> > [client.rgw.storagenode1.rgw0]
> > host = storagenode1
> > keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
> > log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
> > rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
> > rgw thread pool size = 512
> > ```

-- 
~ Lokendra
skype: lokendrarathour
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


