Re: Does the number of PGs affect the total usable size of a pool?

I think that would only happen if pg_num for a 3R pool were less than roughly 1/3 the number of OSDs, assuming aligned device classes, proper CRUSH rules and topology, etc.

Mind you, if pg_num is low, the balancer won't be able to do a great job of distributing data uniformly.  Setting pg_num to a non-power-of-two complicates things as well, since you'd have PGs of very different sizes, but that is rarely seen.
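
Rough math with the numbers mentioned further down in this thread (512 PGs, a size-3 replicated pool, 120 OSDs); the 4096 figure and the ~100-PGs-per-OSD target below are only illustrative:

    # PG replicas per OSD = pg_num * replica size / number of OSDs
    echo $(( 512 * 3 / 120 ))     # ~12  -- only about a dozen PGs per OSD
    echo $(( 4096 * 3 / 120 ))    # ~102 -- close to the commonly cited ~100 per OSD

So 512 is nowhere near low enough to strand raw capacity, but it gives the balancer very little to work with.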

Sharing the output of `ceph osd tree`, `ceph osd dump | grep pool`, and `ceph osd df` would help.
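
In the meantime, a quick way to sanity-check things yourself (the pool name below is a placeholder and 4096 is only an example value):

    ceph osd tree                 # new host and its OSDs under the expected CRUSH bucket, with correct weights
    ceph osd df                   # per-OSD utilization; the PGS column shows PGs per OSD
    ceph osd dump | grep pool     # pool size, pg_num, crush_rule
    ceph df                       # per-pool STORED and MAX AVAIL
    # If pg_num really turns out to be the limiting factor, something like:
    # ceph osd pool set <poolname> pg_num 4096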

> On Feb 13, 2025, at 1:11 PM, Work Ceph <work.ceph.user.mailing@xxxxxxxxx> wrote:
> 
> Exactly, that is what I am assuming. However, my question is: can I assume that the PG number will affect the maximum available space that a pool will be able to use?
> 
> On Thu, Feb 13, 2025 at 3:09 PM Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
>> Assuming that the pool is replicated, 512 PGs is pretty low if this is the only substantial pool on the cluster.  In that case the PGS column at the right of `ceph osd df` would average around 12 or 13, which is super low.
>> 
>>> On Feb 13, 2025, at 11:40 AM, Work Ceph <work.ceph.user.mailing@xxxxxxxxx> wrote:
>>> 
>>> Yes, the bucket that represents the new host is under the ROOT bucket, as are the others. The OSDs are also in the right/expected bucket.
>>> 
>>> I am guessing that the problem is the number of PGs. I have 120 OSDs across all hosts, and I suspect that 512 PGs, which is what the pool is using, is not enough. I have not changed it yet because I wanted to understand the effect of the PG number on a Ceph pool's usable volume.
>>> 
>>> On Thu, Feb 13, 2025 at 12:03 PM Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
>>>> Does the new host show up under the proper CRUSH bucket?  Do its OSDs?  Send `ceph osd tree` please.
>>>> 
>>>> 
>>>> >> 
>>>> >> 
>>>> >>      > Hello guys,
>>>> >>      > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
>>>> >>      > pool that consumes all OSDs of all nodes. After adding another host, I
>>>> >>      > noticed that no extra space was added. Can this be a result of the number
>>>> >>      > of PGs I am using?
>>>> >>      >
>>>> >>      > I mean, when adding more hosts/OSDs, should I always consider increasing
>>>> >>      > the number of PGs of a pool?
>>>> >>      >
>>>> >> 
>>>> >>      ceph osd tree
>>>> >> 
>>>> >>      shows all up and with correct weight?
>>>> >> 
>>>> > 
>>>> > _______________________________________________
>>>> > ceph-users mailing list -- ceph-users@xxxxxxx
>>>> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>>> 
>> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



