On Thu, 14 Feb 2013, Travis Rhoden wrote:
> Hi folks,
>
> Looking at the docs at [1], I see the following advice:
>
> "When using multiple data pools for storing objects, you need to ensure that
> you balance the number of placement groups per pool with the number of
> placement groups per OSD so that you arrive at a reasonable total number of
> placement groups that provides reasonably low variance per OSD without
> taxing system resources or making the peering process too slow."
>
> Can someone expound on this a little bit more for me? Does it mean that if
> I am going to create 3 or 4 pools, all being used heavily, that perhaps I
> should *not* go with the recommended value of PG = (#OSDs * 100)/replicas?
> For example, I have 60 OSDs. With two replicas, that gives me 3000 PGs. I
> have read that there may be some benefit to using a power of two, so I was
> considering making this 4096. If I do this for 3 or 4 pools, is that too
> much? That's what I'm really missing -- how to know when my balance is off
> and I've really set up too many PGs, or too many PGs per OSD.
That "PG" should probably read "total PGs". So, device by 3 or 4.
Unfortunately, though, there is a <facepalm> in the placement code that
makes the placement of PGs for different pools overlap heavily; that will
get fixed in cuttlefish. So if the cluster is large, the data
distribution will degrade somewhat if there are lots of overlapping pools.
For now, I would recommend splitting the difference (somewhere between the
total divided evenly across pools and the full per-pool count).
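
To make the arithmetic concrete, here is a minimal sketch in Python of the
sizing being discussed (the 100-PGs-per-OSD heuristic, the division across
pools, and the power-of-two rounding come from the thread; the helper names
are just illustrative):

    # Rough sketch of the PG sizing discussed above; helper names are illustrative.

    def total_pgs(num_osds, replicas, pgs_per_osd=100):
        """Recommended *total* PG count across all pools."""
        return num_osds * pgs_per_osd // replicas

    def next_power_of_two(n):
        """Round up to the next power of two (optional, but often suggested)."""
        p = 1
        while p < n:
            p *= 2
        return p

    def pgs_per_pool(num_osds, replicas, num_pools, pgs_per_osd=100):
        """Split the total evenly across pools, then round up to a power of two."""
        return next_power_of_two(total_pgs(num_osds, replicas, pgs_per_osd) // num_pools)

    # Example from the thread: 60 OSDs, 2 replicas -> 3000 PGs total.
    # Split across 4 pools -> 750 per pool, rounded up to 1024.
    print(total_pgs(60, 2))        # 3000
    print(pgs_per_pool(60, 2, 4))  # 1024

"Splitting the difference" would then mean picking a per-pool value somewhere
between that 1024 and the 4096 you would use if each pool were sized as though
it had the cluster to itself.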
> Somewhat related -- I have one Ceph cluster that is unlikely to ever use
> CephFS. As such, I don't need the metadata pool at all. Is it safe to
> delete? That would regain me some PGs, and could lighten the load during
> the peering process, I suppose.
Yeah, you can delete them (the unused CephFS pools). If you ever started a ceph-mds, there are a
few semi-documented commands to clean out the mdsmap to make 'ceph health'
happy.
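
In case it helps, a minimal sketch of the pool deletion using the python-rados
bindings (the conffile path and the pool name 'metadata' are assumptions for
your cluster; double-check that nothing is using the pool, and note this does
not do the mdsmap cleanup mentioned above):

    # Minimal sketch: delete an unused pool via python-rados.
    # Assumptions: default conffile path and a pool named 'metadata'.
    # Pool deletion is irreversible, and this does NOT touch the mdsmap.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        if cluster.pool_exists('metadata'):
            cluster.delete_pool('metadata')
    finally:
        cluster.shutdown()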
sage