Re: osd crashed after adding new osd


On Wed, 11 Aug 2010, Henry C Chang wrote:
> After adding osd3, only osd2 is alive.
> But the whole cluster can be recovered after restarting all dead osds.
> 
> I read the wiki and reversed the order of my procedure: starting the osd
> before setting the crush map.
> It seems to work OK now.

Hmm, the order shouldn't matter--there's definitely something going wrong.  
Are you able to reproduce the crash with the old order?  (With logs?  :)

sage
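
(For later readers: the order Henry says works, with the new osd started before the crush map is pushed, can be sketched as below. Paths, the osd id, and command flags are copied from the procedure quoted later in this thread; they may differ in other Ceph versions, so treat this as a sketch rather than a reference procedure.)

```shell
# Sketch of the add-osd order that worked here (adding osd3 to a 3-osd cluster).
# Assumes /etc/ceph/ceph.conf already has an [osd3] section.

# 1. Fetch the current monitor map and initialize the new osd's data store.
ceph -c /etc/ceph/ceph.conf mon getmap -o /tmp/monmap
cosd -c /etc/ceph/ceph.conf -i 3 --mkfs --monmap /tmp/monmap

# 2. Raise the osd count so the cluster knows about osd3.
ceph -c /etc/ceph/ceph.conf osd setmaxosd 4

# 3. Start the new osd FIRST ...
/etc/init.d/ceph -c /etc/ceph/ceph.conf start osd

# 4. ... then generate and push a crush map covering all 4 osds.
osdmaptool --createsimple 4 --clobber /tmp/osdmap.junk --export-crush /tmp/crush.new
ceph -c /etc/ceph/ceph.conf osd setcrushmap -i /tmp/crush.new
```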


> 
> Thanks,
> Henry
> 
> 
> On Wed, Aug 11, 2010 at 4:56 AM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> > Hi Henry,
> >
> > Your osd add procedure looks correct.  Did the osd start correctly after
> > this point?  Are you able to reproduce the problem with osd logging
> > turned up (debug osd = 20 in [osd])?
> >
> > I just tried this with the latest unstable and wasn't able to reproduce
> > the problem.
> >
> > sage
> >
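
(The logging knob Sage refers to is a ceph.conf setting; a minimal fragment, assuming the stock config layout, would be:)

```
[osd]
        debug osd = 20
```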
> > On Wed, 11 Aug 2010, Henry C Chang wrote:
> >
> >> Hi Wido,
> >>
> >> I am running unstable branch (commit: b72c1bb6e9b77e1ab6c2) dated July 30.
> >> I just uploaded the core dumps to
> >>
> >> http://home.anet.net.tw/cycbbb/coredumps/core.2580.gz
> >> http://home.anet.net.tw/cycbbb/coredumps/core.2602.gz
> >>
> >>
> >>
> >> On Tue, Aug 10, 2010 at 11:42 PM, Wido den Hollander <wido@xxxxxxxxx> wrote:
> >> > Hi Henry,
> >> >
> >> > Is there a core-dump of these crashes in /? If so, these could help
> >> > finding the cause of this.
> >> >
> >> > See: http://ceph.newdream.net/wiki/Troubleshooting
> >> >
> >> > And btw, which version of Ceph are you running?
> >> >
> >> > Wido
> >> >
> >> > On Tue, 2010-08-10 at 21:09 +0800, Henry C Chang wrote:
> >> >> Hi,
> >> >>
> >> >> I have a ceph cluster: 3 (mon+osd) and 2 (mds).
> >> >> When I tried to add the 4th osd to the cluster, osd0 and osd1 crashed.
> >> >> The error logs are attached.
> >> >>
> >> >> My procedure to add the 4th osd is:
> >> >>
> >> >> add [osd3] in the conf file: /etc/ceph/ceph.conf
> >> >> ceph -c /etc/ceph/ceph.conf mon getmap -o /tmp/monmap
> >> >> cosd -c /etc/ceph/ceph.conf -i 3 --mkfs --monmap /tmp/monmap
> >> >> ceph -c /etc/ceph/ceph.conf osd setmaxosd 4
> >> >> osdmaptool --createsimple 4 --clobber /tmp/osdmap.junk --export-crush /tmp/crush.new
> >> >> ceph -c /etc/ceph.conf osd setcrushmap -i /tmp/crush.new
> >> >> /etc/init.d/ceph -c /etc/ceph/d2c5d946-b888-40b3-aac2-adda05477a81.conf start osd
> >> >>
> >> >> Is my procedure to add an osd incorrect?
> >> >>
> >> >> Thanks,
> >> >> Henry
> >> >
> >> >
> >> --
> >> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> >> the body of a message to majordomo@xxxxxxxxxxxxxxx
> >> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >>
> >>
> >
> 
> 
