It's fixed now. Apparently we cannot share a journal across different OSDs. I added a journal /dev/sdc1 (20GB) with my first OSD. I was trying to add the same journal with my second OSD, and that was causing the issue. Then I added the second OSD with a new journal and it worked fine.

Thanks,
Kapil.

On Mon, 2014-07-28 at 10:16 +0300, Karan Singh wrote:
> Looks like osd.1 has a valid auth ID, which was defined previously.
>
> Trusting this is your test cluster, try this:
>
> ceph osd crush rm osd.1
> ceph osd rm osd.1
> ceph auth del osd.1
>
> Once again try to add osd.1 using ceph-deploy (prepare and then
> activate commands), and check the logs carefully for any other clues.
>
> - Karan Singh -
>
> On 25 Jul 2014, at 12:49, Kapil Sharma <ksharma at suse.com> wrote:
>
> > Hi,
> >
> > I am using ceph-deploy to deploy my cluster. Whenever I try to add
> > more than one OSD on a node, every OSD except the first gets a
> > weight of 0 and ends up in a state of down and out.
> >
> > So, if I have three nodes in my cluster, I can successfully add one
> > OSD on each of the three nodes, but the moment I try to add a second
> > OSD on any of the nodes, it gets a weight of 0 and goes down and out.
> >
> > The capacity of all the disks is the same.
> >
> > cephdeploy@node-1:~/cluster> ceph osd tree
> > # id    weight  type name       up/down reweight
> > -1      1.82    root default
> > -2      1.82            host node-1
> > 0       1.82                    osd.0   up      1
> > 1       0                       osd.1   down    0
> >
> > There is no error as such after I run the ceph-deploy activate command.
> >
> > Has anyone seen this issue before?
> >
> > Kind Regards,
> > Kapil.
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users at lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
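
For anyone who hits the same thing: each OSD needs its own journal device or partition, and passing the same journal partition to two OSDs is what left osd.1 down here. Below is a minimal sketch of the sequence, assuming /dev/sdb and /dev/sdd are the data disks on node-1 and /dev/sdc1 and /dev/sdc2 are separate 20GB journal partitions (the second journal partition and the data disk names are placeholders, not taken from the thread):

# clean out the stale osd.1 entry first (Karan's steps)
ceph osd crush rm osd.1
ceph osd rm osd.1
ceph auth del osd.1

# prepare and activate each OSD with its own journal partition
ceph-deploy osd prepare node-1:/dev/sdb:/dev/sdc1
ceph-deploy osd activate node-1:/dev/sdb1:/dev/sdc1
ceph-deploy osd prepare node-1:/dev/sdd:/dev/sdc2
ceph-deploy osd activate node-1:/dev/sdd1:/dev/sdc2

# both OSDs should now show up with a non-zero weight
ceph osd tree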