Hello,
Good to hear it's not just me; however, I have a cluster that is basically offline due to too many OSDs dropping because of this issue.
Anybody have any suggestions?
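(For anyone hitting the same thing: a possible stop-gap, assuming the OSDs are asserting during peering rather than failing on real disk errors, is to stop the cluster marking them out/down while you restart them. This is a generic flapping-OSD measure, not a fix for the assert itself, and the OSD id below is a placeholder:

    # Stop the cluster auto-marking flapping OSDs out/down while you investigate
    ceph osd set noout
    ceph osd set nodown

    # Restart an affected OSD and watch whether it stays up
    systemctl restart ceph-osd@12      # "12" is a placeholder OSD id
    ceph -s

    # Once things are stable again, clear the flags
    ceph osd unset nodown
    ceph osd unset noout

Note that nodown will also mask genuinely dead OSDs, so don't leave it set longer than needed.)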
,Ashley

From: Eric Nelson <ericnelson@xxxxxxxxx>
Sent: 16 November 2017 00:06:14
To: Ashley Merrick
Cc: ceph-users@xxxxxxxx
Subject: Re: OSD Random Failures - Latest Luminous

I've been seeing these as well on our SSD cache tier, which has been ravaged by disk failures of late... Same tp_peering assert as above, even when running the luminous branch from git.
Let me know if you have a bug filed I can +1 or have found a workaround.
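(If a tracker ticket does get filed, the stack trace around the assert is the useful bit to attach; something along these lines should pull it out of the OSD log. The path assumes the default Luminous log location, and the OSD id is a placeholder:

    # Grab some context around the assert from the crashed OSD's log
    grep -B 5 -A 30 'FAILED assert' /var/log/ceph/ceph-osd.12.log

)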
E
On Wed, Nov 15, 2017 at 10:25 AM, Ashley Merrick <ashley@xxxxxxxxxxxxxx> wrote:
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com