Re: Questions about pg num setting

Have you had a look at http://ceph.com/pgcalc/?
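
The rule of thumb that calculator implements is roughly (target PGs
per OSD * number of OSDs * expected share of data) / replica size,
snapped to the nearest power of two. As a minimal sketch with your 10
OSDs, assuming 3x replication and a pool holding ~80% of the data
(both figures are my assumptions, not yours):

  # (100 target PGs/OSD * 10 OSDs * 0.80 data share) / size 3 = ~267,
  # then snap to the nearest power of two -> 256
  awk 'BEGIN { raw = 100 * 10 * 0.80 / 3;
               p = 1; while (p * 2 <= raw) p *= 2;
               print ((raw - p <= 2 * p - raw) ? p : 2 * p) }'

(pgcalc itself applies a few extra refinements, so treat this only as
a ballpark.)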

Generally, if you have too many PGs per OSD you can get into trouble
during recovery and backfill: those operations can end up consuming
far more RAM than you have and eventually make the cluster unusable
(some more info can be found here, for example:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013614.html
but there are other threads on the ML).
Also, you currently cannot reduce the number of PGs for a pool, so
you are much better off starting with a lower value and then
gradually increasing it, along the lines of the sketch below.
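
The increase itself is done per pool; something like the following,
where the pool name "rbd" and the numbers are just hypothetical
examples:

  # Raise pg_num in modest steps rather than one big jump, and bump
  # pgp_num to match so the new PGs are actually rebalanced.
  ceph osd pool set rbd pg_num 128
  ceph osd pool set rbd pgp_num 128
  # wait for backfill to finish (HEALTH_OK) before the next step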

The fact that the Ceph developers introduced a config option which
prevents users from increasing the number of PGs past a configured
limit should be a tell-tale sign that having too many PGs per OSD is
considered a problem (see also
https://bugzilla.redhat.com/show_bug.cgi?id=1489064 and the linked PRs).
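
If I remember correctly, the option in question is mon_max_pg_per_osd
(Luminous-era, default 200, though treat the name and default as from
memory); you can check what your monitors are running with via the
admin socket:

  # <id> is your mon's id, e.g. mon.a
  ceph daemon mon.<id> config get mon_max_pg_per_osd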

On Wed, Dec 27, 2017 at 3:15 PM, 于相洋 <penglaiyxy@xxxxxxxxx> wrote:
> Hi cephers,
>
> I have two questions about pg number setting.
>
> First :
> My storage information is shown below:
> HDD: 10 * 8TB
> CPU: Intel(R) Xeon(R) CPU E5645 @ 2.40GHz (24 cores)
> Memory: 64GB
>
> As my HDD capacity is large and I have plenty of memory, I want to
> set as many as 300 PGs per OSD, although 100 PGs per OSD is the
> preferred value. I want to know: what are the disadvantages of
> setting too many PGs?
>
>
> Second:
> At the beginning I cannot judge the capacity proportion of my
> workloads, so I cannot set accurate PG numbers for the different
> pools. How many PGs should I set for each pool to start with?