Re: mon_osd_down_out_subtree_limit stuck at "rack"

On Wednesday 22/10/2014, Christian Balzer wrote:
> Hello,
>
> On Wed, 22 Oct 2014 17:41:45 -0300 Ricardo J. Barberis wrote:
> > On Tuesday 21/10/2014, Christian Balzer wrote:
> > > Hello,
> > >
> > > I'm trying to change the value of mon_osd_down_out_subtree_limit from
> > > rack to something, anything else with ceph 0.80.(6|7).
> > >
> > > Using injectargs it tells me that this isn't a runtime supported change
> > > and changing it in the config file (global section) to either host or
> > > room has no effect.
> > >
> > > Christian
> >
> > I had a similar problem with 0.80.7 and "mon osd downout subtree limit =
> > host" till I realized it's actually "mon osd down out subtree limit =
> > host" (notice the space between "down" and "out").
>
> That's exactly what it was, thanks.
>
> And while I feel moderately stupid for not spotting this difference
> between the config file and what the active configuration displays I
> really, REALLY would love for Ceph to log anything it finds wrong with
> its config during startup. Given how massively chatty Ceph is otherwise
> that would be a most welcome and USEFUL addition to the log deluge.

Yep, that'd be nice.

> > After putting that option in ceph.conf and restarting mons and osds I
> > can see the change, haven't tested it yet though.
>
> Yeah, I shall go and test this now. ^^

I tested it earlier today: a disk died, so I had 1 osd down and recovering; I 
shut down the other osds on the same host and recovery for the whole host 
didn't start.
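
For reference, this is roughly the relevant bit of my ceph.conf (note the 
space between "down" and "out", unlike the "downout" spelling in the docs) 
and how I checked the running value afterwards; the admin socket path is just 
what it looks like on my boxes, yours may differ:

    [global]
        # the space between "down" and "out" is what matters here
        mon osd down out subtree limit = host

    # after restarting the mon, confirm the running value via its admin socket
    # (replace <mon-id> with your monitor's id; the socket path may differ)
    ceph --admin-daemon /var/run/ceph/ceph-mon.<mon-id>.asok config show \
        | grep mon_osd_down_out_subtree_limit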

One oddity I see is that after I brought the host back up without the faulty 
disk and the recovery finished, ceph reports HEALTH_OK but I still see one osd 
down/out: "53 osds: 52 up, 52 in".

> > (Christian: If this is not your case, sorry for hijacking your thread!)
> >
> >
> > Anyway, the docs should be corrected on this point:
> >
> > http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/
>
> Indeed, that's where I cut and pasted it from as well.

I asked on IRC how to go about this; I'll submit a patch tomorrow.

> Thanks again,
>
> Christian
>
> > Cheers,
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




