Re: Maximizing OSD to PG quantity

Hi Dan

You can increase pg_num but not decrease it, so I would go with 512
for this - that still leaves you room to increase in the future.

from ceph.com "Having 512 or 4096 Placement Groups is roughly
equivalent in a cluster with less than 50 OSDs "

I don't think you will even be able to set pg_num to 4096 - ceph will
complain about too many PGs per OSD.
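
To put rough numbers on that (my arithmetic, not a quote from the
docs): the preselection formula gives 12 * 100 / 2 = 600, so the
nearest powers of two are 512 and 1024, while 4096 PGs on a size-2
pool over 12 OSDs works out to roughly 683 PG copies per OSD - well
past the default warning threshold (around 300, assuming the usual
mon_pg_warn_max_per_osd default). A quick sanity check in Python:

    # Back-of-the-envelope check. The ~300-per-OSD warning threshold is
    # my assumption (the mon_pg_warn_max_per_osd default as I recall it),
    # not something stated on the docs page.
    osds, pool_size = 12, 2

    target = osds * 100 / pool_size            # preselection formula -> 600
    per_osd_at_4096 = 4096 * pool_size / osds  # PG copies per OSD at pg_num=4096

    print(target)            # 600.0 -> round to a power of two: 512 or 1024
    print(per_osd_at_4096)   # ~682.7 -> would trigger "too many PGs per OSD"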

You can raise the PG count on a live pool, as I said, but also from
ceph.com: "However, increasing the PG Count of a pool is one of the
most impactful events in a Ceph Cluster, and should be avoided for
production clusters if possible."
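
If you do raise it later, the usual sequence is pg_num first and then
pgp_num, done in a quiet period and watched closely. A minimal sketch,
assuming a pool named "data" (substitute your own pool name) and the
standard ceph CLI on an admin node:

    # Minimal sketch - "data" is a placeholder pool name, not one from
    # this thread. Raise pg_num first, then pgp_num; the pgp_num step is
    # what actually triggers rebalancing onto the new placement groups.
    import subprocess

    def bump_pg_count(pool, target):
        subprocess.check_call(
            ["ceph", "osd", "pool", "set", pool, "pg_num", str(target)])
        subprocess.check_call(
            ["ceph", "osd", "pool", "set", pool, "pgp_num", str(target)])

    bump_pg_count("data", 512)
    # Then watch "ceph -s" / "ceph health" and wait for HEALTH_OK before
    # making any further changes.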

B

On Tue, Apr 5, 2016 at 3:48 PM,  <dan@xxxxxxxxxxxxxxxxx> wrote:
> In a 12 OSD setup, the following config is there:
>
>              (OSDs * 100)
> Total PGs = --------------
>                pool size
>
>
> So with 12 OSDs and a pool size of 2 replicas, this would equal Total PGs
> of 600, as per this url:
>
> http://docs.ceph.com/docs/master/rados/operations/placement-groups/#preselection
>
> Yet in the same page, at the top it says:
>
> Between 10 and 50 OSDs set pg_num to 4096
>
> Our use is for shared hosting, so there are lots of small writes and reads.
> Which of these would be correct?
>
> Also, is it a simple process to update PGs on a live system without
> affecting service?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


