In some common cases (e.g. when you have a lot of objects per PG) Ceph will warn about it.
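You can check for that with ceph health detail; from memory (the exact wording varies by release) the relevant warning looks roughly like this:

    $ ceph health detail
    HEALTH_WARN 1 pools have many more objects per pg than average
    MANY_OBJECTS_PER_PG pool 'data' objects per pg (2048) is more than 10 times cluster average (200)

The threshold should be mon_pg_warn_max_object_skew (default 10), if I remember correctly.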
2018-01-03 11:10 GMT+01:00 Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>:
Is there a disadvantage to just always start pg_num and pgp_num with
something low like 8, and then later increase it when necessary?
The question then is how to identify when it becomes necessary.
-----Original Message-----
From: Christian Wuerdig [mailto:christian.wuerdig@gmail.com]
Sent: Tuesday, 2 January 2018 19:40
To: 于相洋
Cc: Ceph-User
Subject: Re: Questions about pg num setting
Have you had a look at http://ceph.com/pgcalc/?
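For a setup like yours below (10 OSDs, and assuming the default replica size of 3, which is my assumption, not something you stated), the usual rule of thumb behind pgcalc works out to roughly:

    total PGs across all pools ~ (OSDs * 100) / replica size
                               = (10 * 100) / 3 ~ 333
    rounded to a power of two  -> 256 (or 512 if you round up)

Note that 512 PGs at size 3 over 10 OSDs already means ~150 PGs per OSD, above the ~100 target, so 256 is the safer total; pgcalc then splits that total across your pools according to their expected share of the data.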
Generally, if you have too many PGs per OSD you can get yourself into trouble: recovery and backfill operations can consume a lot more RAM than you have, eventually making your cluster unusable (some more info can be found here, for example:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013614.html
but there are other threads on the ML).
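To see where you currently stand, the PGS column of ceph osd df shows how many PGs each OSD holds:

    ceph osd df    # last column (PGS) is the per-OSD placement group count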
Also, you currently cannot reduce the number of PGs for a pool, so you are much better off starting with a lower value and then gradually increasing it.
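Mechanically that looks like the following (the pool name "testpool" and the numbers are just examples; note that pgp_num must be raised to match pg_num before data actually rebalances):

    # create the pool with deliberately low pg_num and pgp_num
    ceph osd pool create testpool 8 8

    # later, grow it in steps
    ceph osd pool set testpool pg_num 16
    ceph osd pool set testpool pgp_num 16

Raising pg_num splits PGs in place, while raising pgp_num triggers the actual data movement, so on a busy cluster it pays to do this in small increments.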
The fact that the Ceph developers introduced a config option which prevents users from increasing the number of PGs once it exceeds the configured limit should be a tell-tale sign that having too many PGs per OSD is considered a problem (see also
https://bugzilla.redhat.com/show_bug.cgi?id=1489064 and the linked PRs).
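If I remember right, the option in question is mon_max_pg_per_osd (introduced in Luminous, default 200): PG creation or pg_num increases that would push any OSD past the limit get refused. In ceph.conf terms:

    [global]
    # refuse to create new PGs once an OSD would exceed this many
    mon_max_pg_per_osd = 200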
On Wed, Dec 27, 2017 at 3:15 PM, 于相洋 <penglaiyxy@xxxxxxxxx> wrote:
> Hi cephers,
>
> I have two questions about pg number setting.
>
> First:
> My storage information is shown below:
> HDD: 10 * 8TB
> CPU: Intel(R) Xeon(R) CPU E5645 @ 2.40GHz (24 cores)
> Memory: 64GB
>
> Since my HDD capacity and memory are fairly large, I want to assign as
> many as 300 PGs to each OSD, even though ~100 PGs per OSD is the
> preferred value. I want to know: what are the disadvantages of setting
> too many PGs?
>
>
> Second:
> At the beginning I cannot judge the capacity proportions of my
> workloads, so I cannot set accurate PG numbers for the different pools.
> How many PGs should I set for each pool to start with?
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com