Re: Upgrade from Giant 0.87-1 to Hammer 0.94-1

On Thu, 16 Apr 2015 10:46:35 +0200 Steffen W Sørensen wrote:

> > That later change would have _increased_ the recommended number of PGs,
> > not decreased it.
> Weird as our Giant health status was ok before upgrading to Hammer…
> 
I'm pretty sure the "too many" check was added around then, and the
"too few" warning some time earlier.

> > With your cluster 2048 PGs total (all pools combined!) would be the
> > sweet spot, see:
> > 
> > http://ceph.com/pgcalc/
> Had read this originally when creating the cluster
> 
> > It seems to me that you increased PG counts assuming that the formula
> > is per pool.
> Well yes, maybe; we believe we bumped PGs because the status complaints in
> Giant mentioned different pool names explicitly, e.g. too few PGs in <pool-name>…
Probably something like "less than 20 PGs" or some such, right?

> so we naturally bumped the mentioned pools up to the next power of two
> until health stopped complaining, and yes, we wondered about this relatively
> high total number of PGs for the cluster, as we had initially read
> pgcalc and thought we understood it.
>

Your cluster (OSD count) needs (should really, it is not a hard failure
but a warning) to be large enough to satisfy the minimum number of PGs, so
(too) many pools with a small cluster will leave you between a rock and a
hard place.
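
To put numbers on it, the pgcalc rule of thumb is roughly: total PGs for ALL
pools combined ≈ (OSDs × 100) / replica count, rounded up to a power of two,
which works out to about 100 PG replicas per OSD. A quick sketch; the ~100
target and 3x replication are assumptions, adjust to your setup:

def recommended_total_pgs(num_osds, replica_size=3, target_per_osd=100):
    # pgcalc-style rule of thumb: one total for ALL pools combined,
    # rounded up to the next power of two.
    raw = num_osds * target_per_osd / float(replica_size)
    pgs = 1
    while pgs < raw:
        pgs *= 2
    return pgs

# e.g. a hypothetical 60-OSD cluster at 3x replication:
print(recommended_total_pgs(60))   # -> 2048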

> ceph.com not responding presently…
> 
It's being DoS'ed last I heard.

> - are you saying one needs to consider in advance #pools in a cluster
> and factor this in when calculating the number of PGs?
> 
Yes. Of course the idea is that pools consume space, so if you have many of
them you will presumably also have more OSDs to spread your PGs around.

> - If so, how to decide which pool gets what #PG, as this is set per
> pool, especially if one can’t precalculate the amount of objects ending up
> in each pool?
> 

Dead reckoning. 
As in, you should have some idea which pool is going to receive how much
data.
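
In other words: take the cluster-wide budget from pgcalc and split it by the
share of data you expect each pool to hold, then round each pool to a power
of two. A sketch of that; the pool names and percentages below are entirely
made up:

import math

def nearest_pow2(x):
    # Round to the nearest power of two (ties round up).
    if x < 1:
        return 1
    lo = 2 ** int(math.floor(math.log(x, 2)))
    hi = lo * 2
    return lo if (x - lo) < (hi - x) else hi

def split_pg_budget(total_pgs, expected_share):
    # expected_share: {pool_name: expected fraction of the cluster's data}
    return {pool: nearest_pow2(total_pgs * frac)
            for pool, frac in expected_share.items()}

# Hypothetical example: 2048 PG budget, three pools.
print(split_pg_budget(2048, {"rbd": 0.70, "app-a": 0.20, "app-b": 0.10}))
# -> {'rbd': 1024, 'app-a': 512, 'app-b': 256}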

> But yes, I also understand that more pools mean more PGs per OSD; does
> this imply that using different pools to segregate various data, e.g. per
> application, in the same cluster is a bad idea?
> 
It certainly can be.

> Using pools as a sort of namespace segregation makes it easy, e.g., to
> remove/migrate data per application, and is thus a handy segregation tool
> IMHO.
>
Certainly, but unless you have a large enough cluster and pools that have
predictable utilization, fewer pools are the answer.
 
> - Is it BCP to consolidate data into a few pools per cluster?
>

It is for me, as I have clusters of similarly small size and only one type
of usage, RBD images. So they have 1 or 2 pools and that's it.

This also results in the smoothest data distribution possible of course.

Christian

> /Steffen

-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




