Re: Upgrade from Giant 0.87-1 to Hammer 0.94-1

> On 16/04/2015, at 11.09, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> On Thu, 16 Apr 2015 10:46:35 +0200 Steffen W Sørensen wrote:
> 
>>> That later change would have _increased_ the number of recommended PG,
>>> not decreased it.
>> Weird as our Giant health status was ok before upgrading to Hammer…
>> 
> I'm pretty sure the "too many" check was added around then, and the
> "too little" warning one earlier.
Okay, that might explain why "too many" only shows up now :)
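
If I understand the checks right, both compare the average number of PG copies
per OSD against the mon_pg_warn_min_per_osd / mon_pg_warn_max_per_osd options.
A rough Python sketch of what I think the logic looks like - the defaults and
the exact accounting are my assumptions, not the actual monitor code:

# Rough sketch of the "too few / too many PGs per OSD" warnings.
# Thresholds here are assumed defaults (I believe roughly 30 and 300 in
# Hammer); check your release's mon_pg_warn_{min,max}_per_osd values.
def pg_health_warnings(pools, num_osds, warn_min=30, warn_max=300):
    """pools: list of (pg_num, replica_size) tuples."""
    # Count PG copies, since every replica lands on some OSD.
    pg_copies = sum(pg_num * size for pg_num, size in pools)
    per_osd = pg_copies // num_osds
    if per_osd < warn_min:
        return "too few PGs per OSD (%d < min %d)" % (per_osd, warn_min)
    if per_osd > warn_max:
        return "too many PGs per OSD (%d > max %d)" % (per_osd, warn_max)
    return "HEALTH_OK (at least as far as PG counts go)"

# Example numbers only: three pools at size 3 on a small OSD count.
print(pg_health_warnings([(512, 3), (512, 3), (256, 3)], num_osds=8))
# -> too many PGs per OSD (480 > max 300)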

>>> It seems to me that you increased PG counts assuming that the formula
>>> is per pool.
>> Well, yes, maybe; I believe we bumped the PGs because the status complaint in
>> Giant explicitly mentioned different pool names, e.g. too few PGs in <pool-name>…
> Probably something like "less than 20 PGs" or some such, right?
Probably, yes; at least fewer than what seemed good for proper distribution.

> Your cluster (OSD count) needs (should really, it is not a hard failure
> but a warning) to be high enough to satisfy the minimum number of PGs, so
> (too) many pools with a small cluster will leave you between a rock and a hard place.
Right, maybe pgcalc should mention/explain a bit about considering the number of pools up front as well...
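
For what it's worth, the rule of thumb behind pgcalc, as I read it, is to pick a
target of roughly 100 PG copies per OSD and derive one total PG budget for the
whole cluster, which then has to cover every pool. Something like this (the 100
target is the commonly quoted value, the OSD count is just an example):

def total_pg_budget(num_osds, replica_size, target_per_osd=100):
    # Each PG is stored replica_size times, and we aim for about
    # target_per_osd PG copies landing on each OSD.
    return (num_osds * target_per_osd) // replica_size

# e.g. 20 OSDs at size 3 -> about 666 PGs to share between *all* pools,
# not 666 (let alone 1024) per pool.
print(total_pg_budget(20, 3))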

>> - are you saying one needs to consider in advance #pools in a cluster
>> and factor this in when calculating the number of PGs?
>> 
> Yes. Of course the idea is that pools consume space, so if you have many,
> you also will have more OSDs to spread your PGs around.
In this case we wanted to test out radosgw & S3 and thus needed to create the required pools, which increased the PG count.
But so far there is no real data in the GW pools, as it failed to work with our AS3-compatible app, so we have removed those pools again.
We are back down to 4 pools: two for CephFS and two for RBD images, each with 1024 PGs, but that is still too many PGs; we will try to consolidate the two RBD pools into one or two new ones with fewer PGs…
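
Doing the arithmetic on our current layout (the OSD counts below are just
illustrative, the point is the ratio): 4 pools x 1024 PGs x size 3 is 12288 PG
copies, which only drops below a 300-per-OSD warning threshold with 41 or more OSDs:

# 4 pools x 1024 PGs, 3x replication; OSD counts are illustrative.
pg_copies = 4 * 1024 * 3                      # 12288 PG copies to place
for num_osds in (20, 30, 41):
    print("%d OSDs -> %.1f PG copies per OSD"
          % (num_osds, pg_copies / float(num_osds)))
# 20 -> 614.4, 30 -> 409.6, 41 -> ~299.7 (just under a 300 warn limit)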

>> - If so, how to decide which pool gets what #PG, as this is set per
>> pool, especially if one can’t precalc the amount objects ending up in
>> each pool?
> Dead reckoning. 
> As in, you should have some idea which pool is going to receive how much data.
> 
> Certainly, but unless you have a large enough cluster and pools that have
> predictable utilization, fewer pools are the answer.
Because that makes it easier to match the PG count against the number of OSDs, I see.
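
That "dead reckoning" is basically what pgcalc does per pool, as far as I can
tell: split the total budget by the share of data you expect each pool to hold,
then round each result up to a power of two (it overshoots a little, which
should be fine well under the warn limit). The percentages below are made up:

import math

def pgs_per_pool(total_budget, expected_share):
    # expected_share: pool name -> expected fraction of the cluster's data.
    # Rounding *up* to a power of two, as I recall pgcalc does.
    return {pool: int(2 ** math.ceil(math.log(total_budget * share, 2)))
            for pool, share in expected_share.items()}

budget = 666   # e.g. 20 OSDs * 100 / size 3, as in the earlier sketch
print(pgs_per_pool(budget, {'rbd': 0.70,
                            'cephfs_data': 0.25,
                            'cephfs_metadata': 0.05}))
# -> {'rbd': 512, 'cephfs_data': 256, 'cephfs_metadata': 64}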

It would be nice if the number of PGs could somehow be decoupled from pools, but then again, how would one figure out where each pool's objects are…
It is just convenient to have all data from a single app in a separate pool/namespace, to easily see usage and perform management tasks :/
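
On that note, RADOS itself already has object namespaces inside a pool, which
would give exactly that per-app separation without paying for extra PGs, but as
far as I know only librados clients can use them (RBD and CephFS don't, at least
not in our versions), so it wouldn't help us here. A small pyrados sketch with
made-up pool/namespace names, assuming a recent enough python-rados:

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # usual admin setup
cluster.connect()
try:
    ioctx = cluster.open_ioctx('app_data')   # one shared pool for many apps
    try:
        ioctx.set_namespace('app1')          # per-app namespace, no extra PGs
        ioctx.write_full('greeting', b'hello from app1')
        print(ioctx.read('greeting'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()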

> It is for me, as I have clusters of similar small size and only one type
> of usage, RBD images. So they have 1 or 2 pools and that's it.
> 
> This also results in the smoothest data distribution possible of course.
Right, thanks for sharing!

/Steffen
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




