Mounting CephFS from cluster IP ok but fails from external IP

Hi all,

I currently have a test cluster running on AWS. The VMs are connected to each other using the internal IP addresses provided by Amazon.

I can mount CephFS using the internal IP address of the mon0 node with this command:
sudo mount -t ceph 172.31.15.xxx:6789:/ /client -o name=admin,secretfile=/root/secret

However, if I try to mount on a client using the external IP of the mon0 node, the mount fails, even when I run the mount on the mon0 node itself:
sudo mount -t ceph 52.28.87.xxx:6789:/ /client -o name=admin,secretfile=/root/secret
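A quick sanity check I can run (sketch only, with the real external IP in place of the x's):

----- sanity checks (sketch) -----
# what does the kernel client report after the failed mount attempt?
dmesg | tail
# can the userspace client reach the mon via the external address?
ceph -m 52.28.87.xxx:6789 -s
----------------------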

The MDS log seems to point to some sort of keyring/auth issue, which I can’t wrap my mind around.
----- ceph-mds -i a -d -----
starting mds.a at :/0
2015-06-19 22:42:38.265025 7fcb8c4b6800  0 ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3), process ceph-mds, pid 2518
2015-06-19 22:42:38.266442 7fcb8c4b6800 -1 asok(0x40ca000) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-mds.a.asok': (13) Permission denied
2015-06-19 22:42:38.266780 7fcb8c4b6800 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2015-06-19 22:42:38.266895 7fcb8c4b6800 -1 mds.-1.0 log_to_monitors {default=true}
2015-06-19 22:42:38.267534 7fcb8c4b6800 -1 mds.-1.0 ERROR: failed to authenticate: (95) Operation not supported
2015-06-19 22:42:38.267584 7fcb8c4b6800  1 mds.-1.0 suicide.  wanted down:dne, now up:boot
---------------------------
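For what it's worth, I started the daemon by hand for that log, so I assume the asok "Permission denied" and "missing keyring" lines just come from running it as a non-root user without pointing it at a keyring, something like this (the keyring path is a guess at the usual default):

----- manual mds run (sketch) -----
# assumption: keyring in the standard /var/lib/ceph location; adjust -k as needed
sudo ceph-mds -i a -d -k /var/lib/ceph/mds/ceph-a/keyring
----------------------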

I have a vague feeling it might be related to the "public network" entry in ceph.conf. But how would I need to change the config to make this work?
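For instance, I'm not sure whether something along these lines could even work, given that the elastic IP is NATed by AWS and never shows up on an instance interface (values are hypothetical):

----- hypothetical ceph.conf change (sketch) -----
[global]
# guess: advertise the external range instead of the internal one;
# but a mon can only bind to an address that exists on a local interface,
# which the EC2 elastic IP does not
public network = 52.28.0.0/16
mon_host = 52.28.87.xxx
----------------------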

I double-checked the firewall rules and ports; everything looks fine on the firewall side.
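(Checked roughly like this from a host outside AWS; a sketch, with ports taken from the Ceph defaults:)

----- port checks (sketch) -----
# mon port
nc -zv 52.28.87.xxx 6789
# default OSD/MDS port range; assumes nmap is installed
nmap -Pn -p 6800-7300 52.28.87.xxx
----------------------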

I’m really puzzled here; any pointers are welcome. Thanks in advance!

Best
Chris

----- ceph.conf -----
[global]
fsid = ef013617-4582-4ab1-bd65-8b7bca638e44
mon_initial_members = mon0
mon_host = 172.31.15.xxx
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

public network = 172.31.0.0/16
mon_clock_drift_allowed = 1
mon_pg_warn_max_per_osd = 0

[osd]
filestore xattr use omap = true
osd pool default size = 3
osd pool default min size = 3
osd crush chooseleaf type = 1
----------------------

----- ceph -s -----
ceph@mon0:~$ ceph -s
    cluster ef013617-4582-4ab1-bd65-8b7bca638e44
     health HEALTH_OK
     monmap e3: 3 mons at {mon0=172.31.15.xx:6789/0,mon1=172.31.4.xxx:6789/0,mon2=172.31.15.xxx:6789/0}
            election epoch 36, quorum 0,1,2 mon1,mon0,mon2
     mdsmap e33: 1/1/1 up {0=osd2=up:active}, 2 up:standby
     osdmap e75: 3 osds: 3 up, 3 in
      pgmap v404: 384 pgs, 3 pools, 2000 MB data, 520 objects
            4122 MB used, 2079 GB / 2083 GB avail
                 384 active+clean
----------------------
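One thing I notice: the monmap only lists the internal addresses, so presumably that is what any client is handed after the initial contact, no matter which address it dialed first. Dumping it to confirm (sketch):

----- monmap dump (sketch) -----
ceph mon getmap -o /tmp/monmap
monmaptool --print /tmp/monmap
----------------------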