Thanks for your response. So... if I configured 3 PGs for the pool, would they necessarily each have their primary on a different OSD, thus spreading the load? Or, would it be better to have more PGs to ensure an even distribution?
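(For what it's worth, I was planning to check where the primaries land
with something like this -- "testpool" is just a hypothetical name, and
I'm not sure it's the canonical way:

  ceph osd pool create testpool 3 3
  ceph pg dump pgs_brief    # acting_primary should show each PG's primary OSD

Is that a reasonable way to verify it?)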
I was also wondering about the performance pros and cons of having a pool size of 3 vs 2. It seems there would be a benefit for reads (1.5 times the bandwidth) but a penalty for writes, because the primary has to forward to 2 nodes instead of 1. Does that make sense?
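(To spell out my arithmetic: for any single object, a pool size of 3
would leave 3 copies available to serve reads instead of 2, i.e.
3/2 = 1.5 times the read bandwidth -- assuming, perhaps wrongly, that
reads can be spread across replicas rather than going only to the
primary. On the write side, every client write becomes 3 OSD writes
instead of 2, so I'd expect aggregate write throughput to drop to
roughly 2/3.)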
-Roland
On Thu, May 5, 2016 at 4:13 PM, Michael Shuey <shuey@xxxxxxxxxxx> wrote:
With a single PG, reads will be limited to about 1/3 of the total
bandwidth. Each PG has a "primary" OSD - that's the first one (and the
only one, if it's up & in) consulted on a read. The other replicas
will still exist, but they'll only take writes (and only after the
primary forwards the data along). If you have multiple PGs, reads
(and write-mastering duties) will be spread across all 3 servers.
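If you want to see where the primaries end up, something like this
should show it (exact column names may vary a bit by release; <pool>
and <objectname> are placeholders):

  ceph pg dump pgs_brief            # up_primary / acting_primary show each PG's primary OSD
  ceph osd map <pool> <objectname>  # shows the up/acting set for the PG a given object maps to

As for PG count: the usual rule of thumb from the docs is roughly
(number of OSDs x 100) / pool size, rounded up to a power of two - so
with 3 OSDs and size 3 that works out to about 100, i.e. 128 PGs. The
health warning you saw fires when one pool has many times more objects
per PG than the cluster average (mon_pg_warn_max_object_skew, 10x by
default, if I remember right).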
--
Mike Shuey
On Thu, May 5, 2016 at 5:36 PM, Roland Mechler <rmechler@xxxxxxxxxxx> wrote:
> Let's say I have a small cluster (3 nodes) with 1 OSD per node. If I create
> a pool with size 3, such that each object in the pool will be replicated to
> each OSD/node, is there any reason to create the pool with more than 1 PG?
> It seems that increasing the number of PGs beyond 1 would not provide any
> additional benefit in terms of data balancing or durability, and would have
> a cost in terms of resource usage. But when I try this, I get a "pool <pool>
> has many more objects per pg than average (too few pgs?)" warning from ceph
> health. Is there a cost to having a large number of objects per PG?
>
> -Roland
Mobile: 604-727-5257 | Email: rmechler@xxxxxxxxx
OpenDNS Vancouver, 675 West Hastings St, Suite 500, Vancouver, BC V6B 1N2, Canada
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com