> -----Original Message-----
> From: Adrian Saul [mailto:Adrian.Saul@xxxxxxxxxxxxxxxxx]
> Sent: 19 June 2017 06:54
> To: nick@xxxxxxxxxx; 'Alex Gorbachev' <ag@xxxxxxxxxxxxxxxxxxx>
> Cc: 'ceph-users' <ceph-users@xxxxxxxxxxxxxx>
> Subject: RE: VMware + CEPH Integration
>
> > Hi Alex,
> >
> > Have you experienced any problems with timeouts in the monitor action
> > in Pacemaker? Although largely stable, every now and again in our
> > cluster the FS and Exportfs resources time out in Pacemaker. There's no
> > mention of any slow requests or any peering etc. in the Ceph logs, so
> > it's a bit of a mystery.
>
> Yes - we see that in our setup, which is very similar. Usually I find it
> is related to RBD device latency caused by scrubbing or similar, but even
> after tuning some of that down we still hit it randomly.
>
> The most annoying part is that once it comes up, having to use "resource
> cleanup" to clear the failed action usually has more impact than the
> actual error.

Are you using STONITH? Pacemaker should be able to recover from any sort of failure as long as it can bring the cluster into a known state.

I'm still struggling to get to the bottom of it in our environment. When it happens, every RBD on the same client host seems to hang, but all other hosts are fine. This suggests it's not a Ceph cluster performance issue, as that would affect the majority of RBDs, not just the ones mapped on a single client.
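For what it's worth, when we chase this we usually look at the affected client first. A rough sketch of what that looks like (assuming kernel RBD mappings and a pcs-managed cluster; `<resource-id>` is a placeholder for whatever resource Pacemaker flagged, and debugfs paths can vary by kernel version):

```shell
# On the affected client: in-flight OSD requests for kernel Ceph clients.
# Entries that sit here for a long time point at a stalled request rather
# than a cluster-wide problem.
cat /sys/kernel/debug/ceph/*/osdc

# Per-device latency, to see whether one RBD device is stalled while the
# others keep moving.
iostat -x 1 /dev/rbd*

# Once the underlying stall clears, clear the failed action in Pacemaker
# (pcs syntax; with crmsh it's "crm resource cleanup <resource-id>").
pcs resource cleanup <resource-id>
```

If all devices on the host stall together it tends to implicate the client (kernel, network path) rather than the OSDs, which matches what you're describing.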
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com