Re: deploying Ceph using FQDN for MON / MDS Services

On Tuesday, January 24, 2023 9:02:41 AM EST Lokendra Rathour wrote:
> Hi Team,
> 
> We have a Ceph cluster with 3 storage nodes:
> 
> 1. storagenode1 - abcd:abcd:abcd::21
> 2. storagenode2 - abcd:abcd:abcd::22
> 3. storagenode3 - abcd:abcd:abcd::23
> 
> The requirement is to mount Ceph using the domain name of the MON node.
> Note: we resolve the domain name via a DNS server.
> 
> For this we are using the command:
> 
> ```
> mount -t ceph [storagenode.storage.com]:6789:/ /backup -o name=admin,secret=AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==
> ```
> 
> We are getting the following logs in /var/log/messages:
> 
> ```
> Jan 24 17:23:17 localhost kernel: libceph: resolve 'storagenode.storage.com' (ret=-3): failed
> Jan 24 17:23:17 localhost kernel: libceph: parse_ips bad ip 'storagenode.storage.com:6789'
> ```


I saw a similar log message recently when I had forgotten to install the Ceph 
mount helper.
Check whether you have a 'mount.ceph' binary on the system. If you don't, try 
to install it from packages; on Fedora I needed to install 'ceph-common'.
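
Roughly, the check and fix look like this (package names are my assumption for 
RPM- and Debian-based systems; adjust for your distribution):

```
# Is the userspace mount helper installed? The helper resolves hostnames
# before the kernel client is involved; without it, name resolution can
# fail the way shown in the log above.
command -v mount.ceph

# If it is missing, install it from packages, e.g.:
sudo dnf install ceph-common    # Fedora / RHEL (what worked for me)
sudo apt install ceph-common    # Debian / Ubuntu (assumed equivalent)

# Then retry the mount with the FQDN.
```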


> We also tried mounting the Ceph storage using the IP of the MON, which works fine.
> 
> Query:
> Could you please help us with how we can mount Ceph using the FQDN?
> 
> My /etc/ceph/ceph.conf is as follows:
> 
> ```
> [global]
> ms bind ipv6 = true
> ms bind ipv4 = false
> mon initial members = storagenode1,storagenode2,storagenode3
> osd pool default crush rule = -1
> fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
> mon host = [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
> public network = abcd:abcd:abcd::/64
> cluster network = eff0:eff0:eff0::/64
> 
> [osd]
> osd memory target = 4294967296
> 
> [client.rgw.storagenode1.rgw0]
> host = storagenode1
> keyring = /var/lib/ceph/radosgw/ceph-rgw.storagenode1.rgw0/keyring
> log file = /var/log/ceph/ceph-rgw-storagenode1.rgw0.log
> rgw frontends = beast endpoint=[abcd:abcd:abcd::21]:8080
> rgw thread pool size = 512
> ```
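
Once mount.ceph is present, a mount by hostname should look much like the 
command you already have; the sketch below just moves the key into a secret 
file (the /etc/ceph/admin.secret path is an example, not a requirement):

```
# Keep the key out of the command line and shell history (example path,
# reusing the key from your message).
echo 'AQCM+8hjqzuZEhAAcuQc+onNKReq7MV+ykFirg==' | sudo tee /etc/ceph/admin.secret
sudo chmod 600 /etc/ceph/admin.secret

# mount.ceph resolves storagenode.storage.com; on an IPv6-only cluster the
# name should resolve to one of the addresses listed under "mon host".
sudo mount -t ceph storagenode.storage.com:6789:/ /backup \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```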





