On Wed, Sep 23, 2015 at 7:10 PM, Fulin Sun <sunfl@xxxxxxxxxxxxxxxx> wrote:
> I changed the source code and commented out the NameNode scheme check in
> NameNode.java, then rebuilt Hadoop 2.7.1 with Maven and replaced
> hadoop-hdfs.jar.
>
> When I try to restart Hadoop, the NameNode still fails with the exception
> below. Does this mean that I cannot define the Ceph host:port? I am really
> confused about this.
>
> Please offer some guidance here.

The NameNode is part of HDFS, not of the Hadoop map-reduce engine. You
don't need it if you're using an alternative filesystem like CephFS
instead of HDFS. I'm not sure what led you to try to turn it on. :)
-Greg

> java.net.BindException: Problem binding to [172.16.50.18:6789]
> java.net.BindException: Address already in use; For more details see:
> http://wiki.apache.org/hadoop/BindException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:938)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:783)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:344)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:673)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:646)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
> Caused by: java.net.BindException: Address already in use
>         at sun.nio.ch.Net.bind0(Native Method)
>         at sun.nio.ch.Net.bind(Net.java:463)
>         at sun.nio.ch.Net.bind(Net.java:455)
>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>         ... 13 more
>
> From: Fulin Sun
> Sent: 2015-09-23 14:59
> To: ceph-users
> Subject: About cephfs with hadoop
>
> Hi, all,
> I am trying to use cephfs as a drop-in replacement for hadoop hdfs. I
> mainly followed the configuration steps in the doc here:
> http://docs.ceph.com/docs/master/cephfs/hadoop/
>
> I am using a 3-node hadoop 2.7.1 cluster. Noting that the official doc
> recommends using a 1.1.x stable release, I am not sure if using a 2.x
> release would cause unexpected trouble. Anyway, I think I have configured
> cephfs correctly, and the ceph storage cluster is healthy.
>
> When I try to start the hadoop cluster, the namenode fails with the
> following exception:
> 2015-09-23 14:47:39,600 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.lang.IllegalArgumentException: Invalid URI for NameNode address
> (check fs.defaultFS): ceph://172.16.50.18:6789/ is not of scheme 'hdfs'.
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:477)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:461)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:512)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:612)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:632)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
>
> This is quite weird, because the NameNode check does not seem to allow any
> scheme except the default hdfs. How would cephfs-hadoop actually work, then?
>
> I really need your help here.
> Best,
> Sun.
>
> CertusNet
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
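For reference, the core-site.xml setup described in the cephfs-hadoop doc linked in the thread looks roughly like the sketch below. This is an illustration based on that doc, not a verified configuration: exact property names should be checked against the plugin version in use, the monitor address is the one from the thread, and the ceph.conf path is an assumed default. The key point, matching Greg's reply, is that fs.default.name points at a Ceph monitor and no NameNode or DataNode is started at all; only the map-reduce/YARN daemons run.

```xml
<!-- core-site.xml: sketch of using CephFS in place of HDFS.
     Property names follow the cephfs-hadoop doc; verify against
     the plugin version actually installed. -->
<configuration>
  <!-- Default filesystem is a Ceph monitor address, not hdfs://namenode.
       172.16.50.18:6789 is the monitor from this thread. -->
  <property>
    <name>fs.default.name</name>
    <value>ceph://172.16.50.18:6789/</value>
  </property>
  <!-- Class implementing the ceph:// scheme (from the cephfs-hadoop plugin,
       which must be on the Hadoop classpath). -->
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>
  <!-- Optional: ceph.conf for the client libraries (assumed default path). -->
  <property>
    <name>ceph.conf.file</name>
    <value>/etc/ceph/ceph.conf</value>
  </property>
</configuration>
```

With this in place the original BindException also makes sense: port 6789 on that host is already held by the Ceph monitor itself, so a NameNode forced onto ceph://host:6789 could never bind there anyway.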