Re: Small cluster PG question

With a single PG, reads will be limited to 1/3 of the total bandwidth.
Each PG has a "primary" OSD - that's the only one (if it's up & in)
consulted on a read.  The other replicas still exist, but they only
take writes (and only after the primary forwards the data along).  If
you have multiple PGs, the primaries (and with them reads and
write-mastering duties) will be spread across all 3 servers.
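
As a quick sketch (the pool name, object name, and pg_num below are
made-up values, not anything from this thread), you can see the
spreading for yourself:

  # Create a replicated pool; 32 PGs is only an example value.
  ceph osd pool create mypool 32 32
  ceph osd pool set mypool size 3

  # Map an object to its PG and print the up/acting set; the
  # "p<N>" entry is the primary OSD, which serves all reads.
  ceph osd map mypool myobject

With 32 PGs, the primaries should land on different OSDs, so reads
(and write-mastering) spread across the whole cluster instead of
hitting one node.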


--
Mike Shuey


On Thu, May 5, 2016 at 5:36 PM, Roland Mechler <rmechler@xxxxxxxxxxx> wrote:
> Let's say I have a small cluster (3 nodes) with 1 OSD per node. If I create
> a pool with size 3, such that each object in the pool will be replicated to
> each OSD/node, is there any reason to create the pool with more than 1 PG?
> It seems that increasing the number of PGs beyond 1 would not provide any
> additional benefit in terms of data balancing or durability, and would have
> a cost in terms of resource usage. But when I try this, I get a "pool <pool>
> has many more objects per pg than average (too few pgs?)" warning from ceph
> health. Is there a cost to having a large number of objects per PG?
>
> -Roland
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


