Re: OSD suicide after being down/in for one day as it needs to search a large number of objects


 



Thanks Greg.
On Aug 20, 2014, at 6:09 AM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:

> On Mon, Aug 18, 2014 at 11:30 PM, Guang Yang <yguang11@xxxxxxxxxxx> wrote:
>> Hi ceph-devel,
>> David (cc’ed) reported a bug (http://tracker.ceph.com/issues/9128) which we came across in our test cluster during failure testing. Basically, the way to reproduce it was to leave one OSD daemon down and in for a day while continuing to send write traffic. When the OSD daemon was started again, it hit the suicide timeout and killed itself.
>> 
>> After some analysis (details in the bug), David found that the op thread was busy searching for missing objects, and once the volume to search increases, the thread is expected to run for that long; please refer to the bug for the detailed logs.
> 
> Can you talk a little more about what's going on here? At a quick
> naive glance, I'm not seeing why leaving an OSD down and in should
> require work based on the amount of write traffic. Perhaps if the rest
> of the cluster was changing mappings…?
We increased the down-to-out time interval from 5 minutes to 2 days to avoid migrating data back and forth, which could increase latency; instead, we intend to mark OSDs out manually. To validate this, we are testing some boundary cases, leaving the OSD down and in for about 1 day; however, when we try to bring it up again, it always fails because it hits the suicide timeout.
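
For reference, a minimal sketch of how such a change is typically made, assuming the interval in question is the standard mon_osd_down_out_interval option (the exact option name and the defaults in the reporter's cluster are not stated in this thread):

    [mon]
    # Wait 2 days (172800 s) before automatically marking a down OSD "out",
    # instead of the short default, so operators can mark OSDs out manually.
    mon osd down out interval = 172800
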
> 
>> 
>> One simple fix is to let the op thread reset the suicide timeout periodically when it is doing long-running work; another fix might be to cut the work into smaller pieces?
> 
> We do both of those things throughout the OSD (although I think the
> first is simpler and more common); search for the accesses to
> cct->get_heartbeat_map()->reset_timeout.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> 
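
To make the pattern Greg describes concrete, here is a small, self-contained C++ sketch of resetting a watchdog between bounded chunks of work. The Watchdog type, its reset_timeout signature, and scan_missing_objects are illustrative stand-ins, not the actual Ceph HeartbeatMap API; in the OSD the corresponding call is the cct->get_heartbeat_map()->reset_timeout accessor Greg mentions.

    // Minimal sketch: reset a watchdog between bounded chunks of a long scan
    // so the scan never trips a suicide timeout.  All names are hypothetical.
    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    struct Watchdog {                              // hypothetical stand-in for HeartbeatMap
      Clock::time_point deadline{};
      void reset_timeout(std::chrono::seconds grace) {
        deadline = Clock::now() + grace;           // push the deadline forward
      }
      bool expired() const { return Clock::now() > deadline; }
    };

    // Scan a large collection in fixed-size chunks, resetting the watchdog
    // after each chunk (analogous to the OSD op thread resetting its
    // heartbeat timeout during long-running work).
    void scan_missing_objects(const std::vector<int>& objects, Watchdog& wd) {
      const std::size_t chunk = 1024;              // cut the work into smaller pieces
      const auto grace = std::chrono::seconds(60); // illustrative grace period
      for (std::size_t i = 0; i < objects.size(); i += chunk) {
        const std::size_t end = std::min(objects.size(), i + chunk);
        for (std::size_t j = i; j < end; ++j) {
          // ... examine objects[j] for "missing" status ...
        }
        wd.reset_timeout(grace);                   // keep the watchdog from firing
      }
    }

    int main() {
      Watchdog wd;
      wd.reset_timeout(std::chrono::seconds(60));
      std::vector<int> objects(100000);
      scan_missing_objects(objects, wd);
      std::printf("scan finished; watchdog expired: %s\n",
                  wd.expired() ? "yes" : "no");
      return 0;
    }

Either approach discussed above achieves the same end: resetting the timeout tells the heartbeat check the thread is still making progress, while chunking the work bounds how long any single pass can run.
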




