On 11/23/2015 07:19 PM, Gregory Farnum wrote:
> On Fri, Nov 20, 2015 at 11:33 AM, Simon Engelsman <simon@xxxxxxxxxxxx> wrote:
>> [cut]
> In addition to what Robert said, it sounds like you've done something
> strange with your CRUSH map. Do you have separate trees for the SSDs
> and hard drives, or are they both under the same host buckets?
>
> You'll also want to dig into more general config stuff like PG counts etc.
> -Greg
> _______________________________________________
>

Thank you for your replies. We will update the backfill_scan parameters; I must have misunderstood the documentation on this point. Yes, I think we have to increase the PG count.

We have two pools, with two separate trees in the CRUSH map. Each node therefore has two different hostnames, one for each tree, and we disabled the CRUSH hostname update on OSD start (osd crush update on start = false).

In general we will make these updates:

- adjust our monitoring threshold for maximum fill from 85% to 70%
- update the backfill_scan parameters
- add memory
- upgrade to Infernalis
- upgrade from straw to straw2 buckets
- add PGs (slowly)

I hope these measures make sense and lower the impact of recovery; rough config and command sketches are at the end of this mail. If somebody has additions, please let me know.

Regards,

Mart van Santen

--
Mart van Santen
Greenhost
E: mart@xxxxxxxxxxxx
T: +31 20 4890444
W: https://greenhost.nl

A PGP signature can be attached to this e-mail,
you need PGP software to verify it.
My public key is available in keyserver(s)
see: http://tinyurl.com/openpgp-manual

PGP Fingerprint: CA85 EB11 2B70 042D AF66 B29A 6437 01A1 10A3 D3A5
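For reference, a minimal ceph.conf sketch of the OSD settings mentioned above. The crush setting is the one we already run; the backfill/recovery values are only illustrative placeholders, not tested recommendations:

    [osd]
    # keep our hand-assigned CRUSH locations for the two trees
    # (do not rewrite the host bucket when an OSD starts)
    osd crush update on start = false

    # throttle backfill/recovery to soften the impact on clients
    # (values are illustrative placeholders, tune per cluster)
    osd backfill scan min = 16
    osd backfill scan max = 128
    osd max backfills = 1
    osd recovery max active = 1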
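And a rough sketch of the commands we expect to use for the straw2 conversion and the gradual PG increase. The pool name and pg_num values are placeholders, and straw2 needs hammer-or-newer CRUSH support on all clients and daemons:

    # convert buckets from straw to straw2 via the decompiled CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    sed -i 's/alg straw$/alg straw2/' crushmap.txt
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin

    # grow placement groups in small steps, waiting for HEALTH_OK in between
    # (pool name "rbd" and the number 2048 are placeholders)
    ceph osd pool set rbd pg_num 2048
    ceph osd pool set rbd pgp_num 2048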