Re: v0.90 released

Hello,

On 23.12.2014 12:14, Henrik Korkuc wrote:
On 12/23/14 12:57, René Gallati wrote:
Hello,

so I upgraded my cluster from 0.89 to 0.90 and now I get:

~# ceph health
HEALTH_WARN too many PGs per OSD (864 > max 300)

That is a new one. I have had too few, but never too many. Is this a problem
that needs attention, or can it be ignored? Or is there even a command now
to shrink the PG count?

The message did not appear before. I currently have 32 OSDs across 8
hosts and 9 pools, each pool with 1024 PGs, which was the recommended
number according to the OSDs * 100 / replicas formula, rounded up to the
next power of 2. The cluster was expanded by 4 OSDs on an 8th host only
days before. That is to say, it was at 28 OSDs / 7 hosts / 9 pools, and
after extending it with another host, ceph 0.89 did not complain.

Using the formula again, I'd actually need to go to 2048 PGs per pool,
but ceph is telling me to reduce the PG count now?

The formula recommends the PG count for all pools combined, not for each
pool. So you need about 2048 PGs in total, distributed among the pools
according to their expected size.

from http://ceph.com/docs/master/rados/operations/placement-groups/:
"When using multiple data pools for storing objects, you need to ensure
that you balance the number of placement groups per pool with the number
of placement groups per OSD so that you arrive at a reasonable total
number of placement groups that provides reasonably low variance per OSD
without taxing system resources or making the peering process too slow."
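
Under the same size=3 assumption as above, the formula then gives a total rather than a per-pool figure:

~# echo $(( 32 * 100 / 3 ))
1066

which rounds up to the next power of 2, i.e. roughly 2048 PGs across all pools combined.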

Ah, I seem to have overlooked this. Luckily for me, I had 5 pools used exclusively for testing purposes and another that was not in use; deleting those put me under the complaint threshold.
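
The cleanup itself is just pool deletion; a sketch with hypothetical pool names (deletion is irreversible, hence the deliberately awkward safety flag):

~# ceph osd lspools
~# ceph osd pool delete testpool1 testpool1 --yes-i-really-really-mean-it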

It appears that, in this case, Giant 0.90 is the first version that actually complains about too many PGs per OSD.
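
If I read the option correctly, the 300 ceiling is the monitor setting mon_pg_warn_max_per_osd, so a cluster that knowingly runs hotter could raise the threshold instead of deleting pools; a sketch, not a recommendation:

~# ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 500'

(persisting the value under [mon] in ceph.conf so it survives restarts)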

What I don't like that much about this "soft limitation" is the fact that PGs are defined per pool, which means that just adding a new pool is not as straightforward as I thought it was. If you are already somewhere near the limit, all you can do is create the new pool with a low PG count, potentially making that pool less well distributed than all the pools that came before. But perhaps the overhead incurred by higher PG counts isn't that bad anyway; after all, it ran well up until now.
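
For completeness, creating such a new pool with a deliberately low PG count would look like this (pool name and numbers are made up):

~# ceph osd pool create newpool 128 128

The pg_num of an existing pool can later be raised with "ceph osd pool set newpool pg_num 256", but it cannot be reduced, which is also why there is no command to shrink PGs.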

Kind regards

René
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



