Re: Mon losing touch with OSDs

G'day Sage,

On Thu, Feb 14, 2013 at 08:57:11PM -0800, Sage Weil wrote:
> On Fri, 15 Feb 2013, Chris Dunlop wrote:
>> In an otherwise seemingly healthy cluster (ceph 0.56.2), what might cause the
>> mons to lose touch with the osds?
> 
> Can you enable 'debug ms = 1' on the mons and leave them that way, in the 
> hopes that this happens again?  It will give us more information to go on.

Debug turned on.
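
For the record, this is what I put in ceph.conf on the mon hosts, plus
the runtime poke for the already-running mons (the admin socket
incantation is from memory, so apologies if I've mangled it; the mon id
and socket path will obviously differ per host):

    [mon]
        debug ms = 1

    # and on each mon host, for the running daemon:
    ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config set debug_ms 1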

>> Perhaps the mon lost osd.1 because it was too slow, but that hadn't happened in
>> any of the many previous "slow requests" instances, and the timing doesn't look
>> quite right: the mon complains it hasn't heard from osd.0 since 20:11:19, but
>> the osd.0 log shows no problems at all; then the mon complains about not
>> having heard from osd.1 since 20:11:21, whereas the first indication of trouble
>> on osd.1 was the request from 20:26:20 not being processed in a timely fashion.
> 
> My guess is the above was a side-effect of osd.0 being marked out.  On
> 0.56.2 there is some strange peering workqueue lagginess that could
> potentially contribute as well.  I recommend moving to 0.56.3.

Upgraded to 0.56.3.
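
FWIW, after restarting everything I double-checked that each daemon had
actually picked up the new version via the admin socket (assuming the
default socket paths; adjust the ids to taste):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok version
    ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok version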

>> Trying to manually set the osds in (e.g. ceph osd in 0) didn't help, nor did
>> restarting the osds ('service ceph restart osd' on each osd host).
>> 
>> The immediate issue was resolved by restarting ceph completely on one of the
>> mon/osd hosts (service ceph restart). Possibly a restart of just the mon would
>> have been sufficient.
> 
> Did you notice that the osds you restarted didn't immediately mark 
> themselves in?  Again, it could be explained by the peering wq issue, 
> especially if there are pools in your cluster that are not getting any IO.

Sorry, no. I was kicking myself later for losing the 'ceph -s' output
when I killed that terminal session, but in the heat of the moment...

I can't see anything in the logs from the time (with no debugging) about
the osds marking themselves in, but I'm on my iPad at the moment so I
could easily have missed it. Should that info be in the logs somewhere?
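
In case it matters, this is roughly what I was grepping for; the exact
log strings are a guess on my part, so I may well be searching for the
wrong thing:

    grep -iE 'boot|marked.*down|marked.*in' \
        /var/log/ceph/ceph-mon.*.log /var/log/ceph/ceph-osd.*.log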

There are certainly unused pools: we're only using the rbd pool, so the
default data and metadata pools see no IO at all.
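
For what it's worth, this is how I check which pools exist and which are
actually seeing any use (just the commands; I don't have the output to
hand right now):

    ceph osd lspools
    rados df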

Thanks for your attention!

Cheers,

Chris

