Re: Hadoop and Ceph integration issues


 



2013-09-23 19:42:22.515836 7f0b58de7700 10 jni: conf_read_file: exit ret 0
2013-09-23 19:42:22.515893 7f0b58de7700 10 jni: ceph_mount: /
2013-09-23 19:42:22.516643 7f0b58de7700 -1 monclient(hunting): ERROR:
missing keyring, cannot use cephx for authentication
2013-09-23 19:42:22.516969 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
2013-09-23 19:42:22.517210 7f0b58de7700 10 jni: ceph_mount: exit ret -2
2013-09-23 19:42:23.520569 7f0b58de7700 10 jni: conf_read_file: exit ret 0
2013-09-23 19:42:23.520601 7f0b58de7700 10 jni: ceph_mount: /
....


On Mon, Sep 23, 2013 at 3:47 PM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
> In the log file that you are showing, do you see where the keyring file
> is being set by Hadoop? You can find it by grepping for: "jni: conf_set"
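>
> For example, assuming the client log you configured ends up at
> /var/log/ceph/hadoop-client.log (substitute your actual 'log file' path):
>
>     grep 'jni: conf_set' /var/log/ceph/hadoop-client.log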
>
> On Mon, Sep 23, 2013 at 12:43 PM, Rolando Martins
> <rolando.martins@xxxxxxxxx> wrote:
>> bin/hadoop fs -ls
>>
>> Bad connection to FS. command aborted. exception:
>>
>> (no other information is thrown)
>>
>> ceph log:
>> 2013-09-23 19:42:27.545402 7f0b58de7700 -1 monclient(hunting): ERROR:
>> missing keyring, cannot use cephx for authentication
>> 2013-09-23 19:42:27.545619 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
>> 2013-09-23 19:42:27.545733 7f0b58de7700 10 jni: ceph_mount: exit ret -2
>>
>> On Mon, Sep 23, 2013 at 3:39 PM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
>>> What happens when you run `bin/hadoop fs -ls` ? This is entirely
>>> local, and a bit simpler and easier to grok.
>>>
>>> On Mon, Sep 23, 2013 at 12:23 PM, Rolando Martins
>>> <rolando.martins@xxxxxxxxx> wrote:
>>>> I am trying to start hadoop using bin/start-mapred.sh.
>>>> In the HADOOP_HOME/lib, I have:
>>>> lib/hadoop-cephfs.jar  lib/libcephfs.jar  lib/libcephfs_jni.so
>>>> (the first I downloaded from
>>>> http://ceph.com/docs/master/cephfs/hadoop/, and the other two I copied
>>>> from my system after installing the Ubuntu package for the Ceph Java
>>>> client)
>>>>
>>>> I added to conf/hadoop-env.sh:
>>>> export LD_LIBRARY_PATH=/hyrax/hadoop-ceph/lib
>>>>
>>>> I confirmed with bin/hadoop classpath that both jars are on the classpath.
>>>>
>>>> On Mon, Sep 23, 2013 at 3:17 PM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
>>>>> How are you invoking Hadoop? Also, I forgot to ask, are you using the
>>>>> wrappers located in github.com/ceph/hadoop-common (or the jar linked
>>>>> to on http://ceph.com/docs/master/cephfs/hadoop/)?
>>>>>
>>>>> On Mon, Sep 23, 2013 at 12:05 PM, Rolando Martins
>>>>> <rolando.martins@xxxxxxxxx> wrote:
>>>>>> Hi Noah,
>>>>>> I enabled the debugging and got:
>>>>>>
>>>>>> 2013-09-23 18:59:34.705894 7f0b58de7700 -1 monclient(hunting): ERROR:
>>>>>> missing keyring, cannot use cephx for authentication
>>>>>> 2013-09-23 18:59:34.706106 7f0b58de7700 20 client.-1 trim_cache size 0 max 0
>>>>>> 2013-09-23 18:59:34.706225 7f0b58de7700 10 jni: ceph_mount: exit ret -2
>>>>>>
>>>>>> I have the ceph.client.admin.keyring file in /etc/ceph, and I tried
>>>>>> with and without the ceph.auth.keyring parameter in core-site.xml.
>>>>>> Unfortunately, no success :(
>>>>>>
>>>>>> Thanks,
>>>>>> Rolando
>>>>>>
>>>>>>
>>>>>> <property>
>>>>>>         <name>fs.ceph.impl</name>
>>>>>>         <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>>         <name>fs.default.name</name>
>>>>>>         <value>ceph://hyrax1:6789/</value>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>>         <name>ceph.conf.file</name>
>>>>>>         <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
>>>>>> </property>
>>>>>>
>>>>>> <property>
>>>>>>         <name>ceph.root.dir</name>
>>>>>>         <value>/</value>
>>>>>> </property>
>>>>>>  <property>
>>>>>>     <name>ceph.auth.keyring</name>
>>>>>>    <value>/hyrax/hadoop-ceph/ceph/ceph.client.admin.keyring</value>
>>>>>> </property>
>>>>>>
>>>>>> On Mon, Sep 23, 2013 at 2:24 PM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
>>>>>>> Shoot, I thought I had it figured out :)
>>>>>>>
>>>>>>> There is a default admin user created when you first create your
>>>>>>> cluster. After a typical install via ceph-deploy, there should be a
>>>>>>> file called 'ceph.client.admin.keyring', usually alongside ceph.conf.
>>>>>>> If it is in a standard location (e.g. /etc/ceph) you shouldn't need
>>>>>>> the keyring option; otherwise, point 'ceph.auth.keyring' at that
>>>>>>> keyring file. You shouldn't need both the keyring and the keyfile
>>>>>>> options set, but it depends on how your authentication and users
>>>>>>> are set up.
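>>>>>>>
>>>>>>> For example, if the keyring is somewhere non-standard, a single
>>>>>>> property pointing at the admin keyring should be enough (the path
>>>>>>> below is just a placeholder):
>>>>>>>
>>>>>>> <property>
>>>>>>>         <name>ceph.auth.keyring</name>
>>>>>>>         <value>/path/to/ceph.client.admin.keyring</value>
>>>>>>> </property>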
>>>>>>>
>>>>>>> The easiest thing to do if that doesn't solve your problem is probably
>>>>>>> to turn on logging so we can see what is blowing up.
>>>>>>>
>>>>>>> In your ceph.conf you can add 'debug client = 20' and 'debug
>>>>>>> javaclient = 20' to the [client] section. You may also need to set a
>>>>>>> log file with 'log file = /path/...'. You don't need to do this on all
>>>>>>> your nodes, just the one node where you see the failure.
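>>>>>>>
>>>>>>> Something like this in the ceph.conf on that node should do it (the
>>>>>>> log path is just an example):
>>>>>>>
>>>>>>> [client]
>>>>>>>         debug client = 20
>>>>>>>         debug javaclient = 20
>>>>>>>         log file = /var/log/ceph/hadoop-client.log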
>>>>>>>
>>>>>>> - Noah
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Rolando
>>>>>>>>
>>>>>>>> P.S.: I have the cephFS mounted locally, so the cluster is ok.
>>>>>>>>
>>>>>>>> cluster d9ca74d0-d9f4-436d-92de-762af67c6534
>>>>>>>>    health HEALTH_OK
>>>>>>>>    monmap e1: 9 mons at
>>>>>>>> {hyrax1=10.10.10.10:6789/0,hyrax2=10.10.10.12:6789/0,hyrax3=10.10.10.15:6789/0,hyrax4=10.10.10.13:6789/0,hyrax5=10.10.10.16:6789/0,hyrax6=10.10.10.14:6789/0,hyrax7=10.10.10.18:6789/0,hyrax8=10.10.10.17:6789/0,hyrax9=10.10.10.11:6789/0},
>>>>>>>> election epoch 6, quorum 0,1,2,3,4,5,6,7,8
>>>>>>>> hyrax1,hyrax2,hyrax3,hyrax4,hyrax5,hyrax6,hyrax7,hyrax8,hyrax9
>>>>>>>>    osdmap e30: 9 osds: 9 up, 9 in
>>>>>>>>     pgmap v2457: 192 pgs: 192 active+clean; 10408 bytes data, 44312 MB
>>>>>>>> used, 168 GB / 221 GB avail
>>>>>>>>    mdsmap e4: 1/1/1 up {0=hyrax1=up:active}
>>>>>>>>
>>>>>>>>
>>>>>>>> <property>
>>>>>>>> <name>fs.ceph.impl</name>
>>>>>>>> <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
>>>>>>>> </property>
>>>>>>>>
>>>>>>>> <property>
>>>>>>>> <name>fs.default.name</name>
>>>>>>>> <value>ceph://hyrax1:6789/</value>
>>>>>>>> </property>
>>>>>>>>
>>>>>>>> <property>
>>>>>>>> <name>ceph.conf.file</name>
>>>>>>>> <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
>>>>>>>> </property>
>>>>>>>>
>>>>>>>> <property>
>>>>>>>> <name>ceph.root.dir</name>
>>>>>>>> <value>/</value>
>>>>>>>> </property>
>>>>>>>>
>>>>>>>> <property>
>>>>>>>> <name>ceph.auth.keyfile</name>
>>>>>>>> <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
>>>>>>>> </property>
>>>>>>>>
>>>>>>>> <property>
>>>>>>>> <name>ceph.auth.keyring</name>
>>>>>>>> <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
>>>>>>>> </property>
>>>>>>>>
>>>>>>>> On Mon, Sep 23, 2013 at 11:42 AM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
>>>>>>>>>> <property>
>>>>>>>>>>         <name>ceph.root.dir</name>
>>>>>>>>>>         <value>/mnt/mycephfs</value>
>>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> This is probably causing the issue. Is this meant to be a local mount
>>>>>>>>> point? The 'ceph.root.dir' property specifies the root directory
>>>>>>>>> /inside/ CephFS, and the Hadoop implementation doesn't require a local
>>>>>>>>> CephFS mount--it uses a client library to interact with the file
>>>>>>>>> system.
>>>>>>>>>
>>>>>>>>> The default value for this property is "/", so you can probably just
>>>>>>>>> remove this from your config file unless your CephFS directory
>>>>>>>>> structure is carved up in a special way.
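>>>>>>>>>
>>>>>>>>> If you did want Hadoop to live under a subdirectory of CephFS (say,
>>>>>>>>> a hypothetical /hadoop directory), that would be the one case where
>>>>>>>>> you'd keep the property, e.g.:
>>>>>>>>>
>>>>>>>>> <property>
>>>>>>>>>         <name>ceph.root.dir</name>
>>>>>>>>>         <value>/hadoop</value>
>>>>>>>>> </property>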
>>>>>>>>>
>>>>>>>>>> <property>
>>>>>>>>>>         <name>ceph.conf.file</name>
>>>>>>>>>>         <value>/hyrax/hadoop-ceph/ceph/ceph.conf</value>
>>>>>>>>>> </property>
>>>>>>>>>> <property>
>>>>>>>>>>         <name>ceph.auth.keyfile</name>
>>>>>>>>>>         <value>/hyrax/hadoop-ceph/ceph/admin.secret</value>
>>>>>>>>>> </property>
>>>>>>>>>>
>>>>>>>>>> <property>
>>>>>>>>>>         <name>ceph.auth.keyring</name>
>>>>>>>>>>         <value>/hyrax/hadoop-ceph/ceph/ceph.mon.keyring</value>
>>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> These files will need to be available locally on every node Hadoop
>>>>>>>>> runs on. I think the error below will occur after these are loaded, so
>>>>>>>>> it probably isn't your issue, though I don't recall exactly at which
>>>>>>>>> point different configuration files are loaded.
>>>>>>>>>
>>>>>>>>>> <property>
>>>>>>>>>>         <name>fs.hdfs.impl</name>
>>>>>>>>>>         <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
>>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> I don't think this is part of the problem you are seeing, but this
>>>>>>>>> 'fs.hdfs.impl' property should probably be removed. We aren't
>>>>>>>>> overriding HDFS, just replacing it.
>>>>>>>>>
>>>>>>>>>> <property>
>>>>>>>>>>         <name>ceph.mon.address</name>
>>>>>>>>>>         <value>hyrax1:6789</value>
>>>>>>>>>> </property>
>>>>>>>>>
>>>>>>>>> This was already specified in your 'fs.default.name' property. I don't
>>>>>>>>> think that duplicating it is an issue, but I should probably update
>>>>>>>>> the documentation to make it clear that the monitor only needs to be
>>>>>>>>> listed once.
>>>>>>>>>
>>>>>>>>> Thanks!
>>>>>>>>> Noah
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



