Mount Ceph using FQDN

Hi team,

We have a Ceph cluster with 3 storage nodes:
1. storagenode1 - abcd:abcd:abcd::21
2. storagenode2 - abcd:abcd:abcd::22
3. storagenode3 - abcd:abcd:abcd::23

We have a DNS server at abcd:abcd:abcd::31 which resolves all three of the above IPs to a single hostname.
The zone file is as follows:
```
$TTL 1D
@               IN  SOA   storage.com root (
                              6     ; serial
                              1D    ; refresh
                              1H    ; retry
                              1W    ; expire
                              3H )  ; minimum

                IN  NS    master
master          IN  A     10.0.1.31
storagenode     IN  AAAA  abcd:abcd:abcd::21
storagenode     IN  AAAA  abcd:abcd:abcd::22
storagenode     IN  AAAA  abcd:abcd:abcd::23
```
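
For reference, the round-robin records can be checked from the client before mounting; a minimal sketch, assuming dig (from bind-utils) is available and the client's resolver points at the DNS server above:
```
# Query the AAAA records served for the shared hostname
dig AAAA storagenode.storage.com +short

# Expected output (record order may rotate):
# abcd:abcd:abcd::21
# abcd:abcd:abcd::22
# abcd:abcd:abcd::23
```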

We want to mount the Ceph storage on a client node using this hostname.
For this we are using the command:
```
mount -t ceph [storagenode.storage.com]:6789:/  /backup -o name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
```

We are getting the following logs in /var/log/messages:
```
Jan 24 17:23:17 localhost kernel: libceph: resolve 'storagenode.storage.com' (ret=-3): failed
Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip 'storagenode.storage.com:6789'
```
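
For context: the in-kernel Ceph client (libceph) cannot call the glibc resolver directly, so hostname resolution from kernel space normally goes through the keyutils dns_resolver upcall. A minimal sketch of the upcall configuration we would expect on the client, assuming the keyutils package is installed (the helper path may differ by distribution):
```
# /etc/request-key.conf (or a drop-in under /etc/request-key.d/)
# Route kernel dns_resolver key requests to the userspace helper
create  dns_resolver  *  *  /sbin/key.dns_resolver %k
```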

We also tried mounting the Ceph storage without the DNS server, resolving the hostname via /etc/hosts instead:
```
abcd:abcd:abcd::21 storagenode1
```
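
The static mapping can be double-checked on the client with getent (a sketch; ahostsv6 restricts the lookup to IPv6 results):
```
# Verify the /etc/hosts entry resolves to the expected IPv6 address
getent ahostsv6 storagenode1
# Expected output:
# abcd:abcd:abcd::21  STREAM storagenode1
# abcd:abcd:abcd::21  DGRAM
# abcd:abcd:abcd::21  RAW
```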

But we get the same errors.

Also kindly note that the mount succeeds if we use IPs instead of the domain name.
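
For comparison, the IP-based mount that does work looks like this (a sketch assuming the same options as above; IPv6 literals must be enclosed in square brackets, and the secret is truncated here):
```
# Works: monitor address given as an IPv6 literal in brackets
mount -t ceph [abcd:abcd:abcd::21]:6789:/ /backup -o name=admin,secret=AQCM...
```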

Could you please help us out with how we can mount Ceph using an FQDN?

Kindly let me know if any other information is required.

My ceph.conf configuration is as follows:
```
[global]
ms bind ipv6 = true
ms bind ipv4 = false
mon initial members = storagenode1,storagenode2,storagenode3
osd pool default crush rule = -1
fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
mon host = [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
public network = abcd:abcd:abcd::/64
cluster network = eff0:eff0:eff0::/64

[osd]
osd memory target = 4294967296

[client.rgw.storagenode1.rgw0]
host = storagenode1
keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
rgw thread pool size = 512
```

Thanks and Regards