HEALTH_ERR on OSD full

Thanks Greg.  I appreciate the advice, and very quick replies too :)


On 18 July 2014 23:35, Gregory Farnum <greg at inktank.com> wrote:

> On Fri, Jul 18, 2014 at 3:29 PM, James Eckersall
> <James.Eckersall at fasthosts.com> wrote:
> > Thanks Greg.
> >
> > Can I suggest that the documentation makes this much clearer?  It might
> > just be me, but I couldn't glean this from the docs, so I expect I'm not
> > the only one.
> >
> > Also, can I clarify how many PGs you would suggest is a decent number
> > for my setup?
> >
> > 80 OSDs across 4 nodes.  5 pools.
> > I'm averaging 38 PGs per OSD, and from the online docs and older posts
> > on this list, I think I should be aiming for between 50 and 100?
> >
> > I'm hoping that the uneven distribution is caused by only having 38 PGs
> > per OSD, and that it can be fairly easily rectified.
>
> That seems likely. The general formula to get a baseline is
> (100*OSDs/replication count) when using one pool. It's also generally
> better to err on the side of more PGs than fewer; they have a cost but
> OSDs can usually scale into the high thousands of PGs, so I personally
> prefer people to go a little higher than that. You'll also want to
> adjust things so that the pools with more data get more PGs than the
> ones with much less, or they won't do you much good.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
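
For reference, a minimal sketch of that baseline calculation in Python, assuming a replication count of 3 and a target of ~100 PGs per OSD (the replication count isn't stated anywhere in the thread):

# Minimal sketch of the baseline PG calculation from the reply above.
# Assumption (not given in the thread): replication count = 3.

def baseline_total_pgs(num_osds, replication, pgs_per_osd=100):
    # Greg's baseline for a single pool: 100 * OSDs / replication count.
    return pgs_per_osd * num_osds / replication

def round_up_to_power_of_two(n):
    # pg_num is conventionally rounded up to the next power of two.
    p = 1
    while p < n:
        p *= 2
    return p

total = baseline_total_pgs(num_osds=80, replication=3)   # ~2666.7
print(round_up_to_power_of_two(total))                   # 4096

# With several pools, split that total so the pools holding more data
# get more of the PGs, as suggested above.

With the 5 pools described above, that total would then be divided unevenly, weighted towards the pools expected to hold the most data.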

