Re: guidance with hadoop on ceph

Hi Varun,

Try removing this configuration option:

> <property>
>   <name>ceph.root.dir</name>
>   <value>/mnt/ceph</value>
> </property>

Hadoop running on Ceph uses the libcephfs user-space library to talk directly to the file system, rather than going through the kernel or FUSE client. This setting specifies which directory within the Ceph file system to use as the root; it defaults to "/".
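For reference, here is a minimal core-site.xml sketch with that property removed (the values are taken from your mail; adjust them for your own setup):

<configuration>
<property>
  <name>fs.default.name</name>
  <value>ceph:///</value>
</property>

<property>
  <name>ceph.conf.file</name>
  <value>/etc/ceph/ceph.conf</value>
</property>

<!-- ceph.root.dir omitted: it falls back to its default of "/" -->
</configuration>

Also note that because Hadoop bypasses the kernel mount, the paths you pass to the wordcount job refer to locations inside CephFS itself, not to the /mnt/ceph mount point. So /mnt/ceph/wc2 would presumably become /wc2 (assuming that is where the data actually lives in the Ceph file system).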

Thanks,
-Noah


On Mar 19, 2013, at 7:04 AM, Varun Chandramouli <varun.c37@xxxxxxxxx> wrote:

> Hi,
> 
> Sorry for bringing this thread up again. After building hadoop and ceph, I am not able to run the wordcount example. I am getting the following error:
> 
> varunc@varunc4-virtual-machine:/usr/local/hadoop$ time /usr/local/hadoop/bin/hadoop jar hadoop*examples*.jar wordcount /mnt/ceph/wc2 /mnt/ceph/op
> Warning: $HADOOP_HOME is deprecated.
> 
> java.io.FileNotFoundException:
>         at com.ceph.fs.CephMount.native_ceph_mount(Native Method)
>         at com.ceph.fs.CephMount.mount(CephMount.java:152)
>         at org.apache.hadoop.fs.ceph.CephTalker.initialize(CephTalker.java:103)
>         at org.apache.hadoop.fs.ceph.CephFileSystem.initialize(CephFileSystem.java:98)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:238)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:372)
>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:65)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> 
> I have mounted ceph fs on /mnt/ceph, and have a 2 node cluster. Following are my *-site.xml files:
> core-site.xml (I have tried quite a few permutations of these properties):
> 
> <configuration>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/app/hadoop/tmp</value>
>   <description>A base for other temporary directories.</description>
> </property>
> 
> <property>
>   <name>fs.default.name</name>
>   <value>ceph:///</value>
>   <description>The name of the default file system.  A URI whose
>   scheme and authority determine the FileSystem implementation.  The
>   uri's scheme determines the config property (fs.SCHEME.impl) naming
>   the FileSystem implementation class.  The uri's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
> 
> <property>
>   <name>ceph.conf.file</name>
>   <value>/etc/ceph/ceph.conf</value>
> </property>
> 
> <property>
>   <name>ceph.root.dir</name>
>   <value>/mnt/ceph</value>
> </property>
> </configuration>
> 
> mapred-site.xml:
> 
> <configuration>
> <property>
>   <name>mapred.job.tracker</name>
>   <value>varunc4-virtual-machine:54311</value>
>   <description>The host and port that the MapReduce job tracker runs
>   at.  If "local", then jobs are run in-process as a single map
>   and reduce task.
>   </description>
> <!--  <name>mapred.map.tasks</name>
>   <value>1</value>-->
> </property>
> </configuration>
> 
> I hope there is no need to make any changes to the hdfs-site.xml.
> 
> Can someone please tell me what is wrong and how to get it working?
> 
> Thanks
> Varun
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

