Re: v0.90 released

On 12/23/14 12:57, René Gallati wrote:
Hello,

So I upgraded my cluster from 0.89 to 0.90, and now I get:

~# ceph health
HEALTH_WARN too many PGs per OSD (864 > max 300)

That is a new one. I had too few before, but never too many. Is this a problem that needs attention, or can it be ignored? Or is there now a command to shrink the number of PGs?

The message did not appear before. I currently have 32 OSDs across 8 hosts and 9 pools, each pool with 1024 PGs, which was the recommended number according to the OSD * 100 / replica formula, rounded up to the next power of 2. The cluster was expanded by 4 OSDs (the 8th host) only days ago. That is to say, it was at 28 OSDs / 7 hosts / 9 pools, and after extending it with another host, Ceph 0.89 did not complain.

Using the formula again, I'd actually need to go to 2048 PGs in the pools, but Ceph is telling me to reduce the PG count now?

The formula recommends the PG count for all pools combined, not for each pool. So you need about 2048 PGs in total, distributed across pools according to their expected size.
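As a quick sanity check, the 864 in the warning is consistent with counting every pool's PG replicas against each OSD. A minimal sketch, assuming a replica size of 3 for all nine pools (the thread does not state the actual pool sizes):

```python
# Hypothetical reconstruction of the "864 PGs per OSD" figure from the warning.
# Assumptions: all 9 pools use 1024 PGs and a replica size of 3.
pools = 9
pgs_per_pool = 1024
replicas = 3  # assumed; not stated in the thread
osds = 32

# Each PG is stored on `replicas` OSDs, so the per-OSD count includes replicas.
pg_copies = pools * pgs_per_pool * replicas
pgs_per_osd = pg_copies // osds
print(pgs_per_osd)  # 864, matching the HEALTH_WARN message
```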

from http://ceph.com/docs/master/rados/operations/placement-groups/:
"When using multiple data pools for storing objects, you need to ensure that you balance the number of placement groups per pool with the number of placement groups per OSD so that you arrive at a reasonable total number of placement groups that provides reasonably low variance per OSD without taxing system resources or making the peering process too slow."


Kind regards

René
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



