Re: RAM recommendation with large OSDs?

It’s not that the limit is *ignored*; sometimes the failure of the subtree isn’t *detected*.  E.g., I’ve seen this happen when a node experienced kernel weirdness or OOM conditions such that the OSDs didn’t all get marked down at the same time, so the PGs all started recovering.  Admittedly it’s been a while since I’ve seen this; my sense is that with Luminous the detection became a *lot* better.
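
For reference, a minimal sketch of how these two guards are typically pinned down in ceph.conf — the values here are illustrative, not recommendations, so check the defaults for your release:

    [mon]
    # Smallest CRUSH subtree type that Ceph will *not* automatically mark
    # out if it fails wholesale -- leave a whole-host failure for an
    # operator rather than rebalancing the node's data.
    mon_osd_down_out_subtree_limit = host

    # Stop automatically marking OSDs out once doing so would drop the
    # fraction of "in" OSDs below this ratio.
    mon_osd_min_in_ratio = 0.75

As the thread notes, these only help when the monitors actually see the subtree failure as one event; staggered down reports can slip past the subtree limit.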



> On Oct 3, 2019, at 9:55 AM, Darrell Enns <darrelle@xxxxxxxxxxxx> wrote:
> 
> Thanks for the reply, Anthony.
> 
> Those are all considerations I am very much aware of. I'm very curious about this though:
> 
>> mon_osd_down_out_subtree_limit.  There are cases where it doesn’t kick in and a whole node will attempt to rebalance
> 
> In what cases is the limit ignored? Do these exceptions also apply to mon_osd_min_in_ratio? Is this in the docs somewhere?
> 
[ good Cephers trim their quoted text ]
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



