Hi Noah,

I enabled the debugging and got:

2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret -2

I have the ceph.client.admin.keyring file in /etc/ceph, and I tried both with and without the 'ceph.auth.keyring' parameter in core-site.xml, unfortunately without success :(

Thanks,
Rolando

<property>
  <name>fs.ceph.impl</name>
  <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
</property>

<property>
  <name>fs.default.name</name>
  <value>ceph://hyrax1:6789/</value>
</property>

<property>
  <name>ceph.conf.file</name>
  <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
</property>

<property>
  <name>ceph.root.dir</name>
  <value>/</value>
</property>

<property>
  <name>ceph.auth.keyring</name>
  <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
</property>

On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
> Shoot, I thought I had it figured out :)
>
> There is a default admin user created when you first create your
> cluster. After a typical install via ceph-deploy, there should be a
> file called 'ceph.client.admin.keyring', usually sibling to ceph.conf.
> If this is in a standard location (e.g. /etc/ceph) you shouldn't need
> the keyring option; otherwise, point 'ceph.auth.keyring' at that
> keyring file. You shouldn't need both the keyring and the keyfile
> options set, but it just depends on how your authentication / users
> are all set up.
>
> If that doesn't solve your problem, the easiest thing to do is probably
> to turn on logging so we can see what is blowing up.
>
> In your ceph.conf you can add 'debug client = 20' and 'debug
> javaclient = 20' to the client section. You may also need to set the
> log file: 'log file = /path/...'. You don't need to do this on all your
> nodes, just the one node where you get the failure.
>
> - Noah
>
>> Thanks,
>> Rolando
>>
>> P.S.: I have CephFS mounted locally, so the cluster is ok.
>>
>>   cluster d9ca74d0-d9f4-436d-92de-762af67c6534
>>    health HEALTH_OK
>>    monmap e1: 9 mons at {hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0}, election epoch 6, quorum 0,1,2,3,4,5,6,7,8 hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
>>    osdmap e30: 9 osds: 9 up, 9 in
>>    pgmap v2457: 192 pgs: 192 active+clean; 10408 bytes data, 44312 MB used, 168 GB / 221 GB avail
>>    mdsmap e4: 1/1/1 up {0=hyrax1=up:active}
>>
>>
>> <property>
>>   <name>fs.ceph.impl</name>
>>   <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
>> </property>
>>
>> <property>
>>   <name>fs.default.name</name>
>>   <value>ceph://hyrax1:6789/</value>
>> </property>
>>
>> <property>
>>   <name>ceph.conf.file</name>
>>   <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
>> </property>
>>
>> <property>
>>   <name>ceph.root.dir</name>
>>   <value>/</value>
>> </property>
>>
>> <property>
>>   <name>ceph.auth.keyfile</name>
>>   <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
>> </property>
>>
>> <property>
>>   <name>ceph.auth.keyring</name>
>>   <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
>> </property>
>>
>> On Mon, Sep 23, 2013 at 11:42 AM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
>>>> <property>
>>>>   <name>ceph.root.dir</name>
>>>>   <value>/mnt/mycephfs</value>
>>>> </property>
>>>
>>> This is probably causing the issue. Is this meant to be a local mount
>>> point? The 'ceph.root.dir' property specifies the root directory
>>> /inside/ CephFS, and the Hadoop implementation doesn't require a local
>>> CephFS mount--it uses a client library to interact with the file
>>> system.
>>>
>>> The default value for this property is "/", so you can probably just
>>> remove this from your config file unless your CephFS directory
>>> structure is carved up in a special way.
>>>
>>>> <property>
>>>>   <name>ceph.conf.file</name>
>>>>   <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
>>>> </property>
>>>>
>>>> <property>
>>>>   <name>ceph.auth.keyfile</name>
>>>>   <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
>>>> </property>
>>>>
>>>> <property>
>>>>   <name>ceph.auth.keyring</name>
>>>>   <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
>>>> </property>
>>>
>>> These files will need to be available locally on every node Hadoop
>>> runs on. I think the error below will occur after these are loaded, so
>>> it probably isn't your issue, though I don't recall exactly at which
>>> point the different configuration files are loaded.
>>>
>>>> <property>
>>>>   <name>fs.hdfs.impl</name>
>>>>   <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
>>>> </property>
>>>
>>> I don't think this is part of the problem you are seeing, but this
>>> 'fs.hdfs.impl' property should probably be removed. We aren't
>>> overriding HDFS, just replacing it.
>>>
>>>> <property>
>>>>   <name>ceph.mon.address</name>
>>>>   <value>hyrax1:6789</value>
>>>> </property>
>>>
>>> This was already specified in your 'fs.default.name' property. I don't
>>> think that duplicating it is an issue, but I should probably update
>>> the documentation to make it clear that the monitor only needs to be
>>> listed once.
>>>
>>> Thanks!
>>> Noah

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
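
For anyone hitting the same "missing keyring" error: return code -2 is ENOENT, i.e. libcephfs could not find the keyring at the path it resolved, so it is worth checking that the keyring path named in ceph.conf (or in 'ceph.auth.keyring') exists and is readable by the user running Hadoop. A minimal sketch of the [client] section along the lines Noah describes might look like the following; the keyring and log file paths here are only examples, not values taken from this thread:

    [client]
        # point libcephfs at the admin keyring explicitly
        keyring = /etc/ceph/ceph.client.admin.keyring
        # verbose client and java-binding logging, as suggested above
        debug client = 20
        debug javaclient = 20
        # the log file must be writable by the user running the Hadoop daemons
        log file = /var/log/ceph/client.$name.$pid.log

You can also sanity-check the keyring outside of Hadoop with something like 'ceph -s --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.admin.keyring'; if that fails with the same authentication error, the problem is likely on the Ceph side rather than in core-site.xml.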