Re: proxmox hyperconverged pg calculations in ceph pacific, pve 7.2

> Hello,
> 
> I still do not really understand why this error message comes up.
> The error message contains two significant numbers. The first one, which is easy to understand, is the maximum number of PGs per OSD, a config option with a compiled-in default (mon_max_pg_per_osd). The value on my cluster is 250.

This is somewhat subtle. This limit is not sum(pg_num) / #OSDs; it’s the maximum number of PGs that can be present on a given OSD. That means, as Frank noted, PG *shards* / replicas, since each PG is located on multiple OSDs.

> This value is multiplied by the total number of OSDs (88): the result is a maximum of 22000 PGs for the whole cluster (based on mon_max_pg_per_osd).

22000 PG *shards/replicas*.  It would be less confusing if we had a different term there.
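
To make the arithmetic concrete, here is a rough sketch (plain Python, nothing Ceph-specific) of the check the mon appears to apply: sum pg_num * size over every pool, including the one being created, and compare that against mon_max_pg_per_osd * num_in_osds.  The per-pool sizes below are assumptions on my part (EC 5+3 data pools -> size 8, rbd and the metadata pools replicated with size 3); your actual sizes, plus any pools not shown in the autoscale listing, account for the exact 22148 in the error.

  mon_max_pg_per_osd = 250
  num_in_osds = 88

  # (pg_num, size) per pool -- the sizes are assumed, not read from the cluster
  pools = {
      "rbd":           (32, 3),
      "px-a-data":     (512, 8), "px-a-metadata": (128, 3),
      "px-b-data":     (512, 8), "px-b-metadata": (128, 3),
      "px-c-data":     (512, 8), "px-c-metadata": (128, 3),
      "px-d-data":     (512, 8), "px-d-metadata": (128, 3),
  }
  new_pool = (512, 8)   # the EC data pool you tried to create

  # total PG shards/replicas across the cluster, not plain PGs
  projected = sum(pg * size for pg, size in pools.values()) + new_pool[0] * new_pool[1]
  limit = mon_max_pg_per_osd * num_in_osds    # 250 * 88 = 22000

  print(projected, "vs", limit)   # roughly 22100 vs 22000 with these assumed sizes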

> Well, as said, the whole cluster only has 2592 PGs at the moment, and I wanted to add another 512+128 PGs.


> 
> Then there is the second value "22148". The meaning can only be PGs, since this value is being compared to the total number of PGs for the whole cluster (based on "mon_max_pg_per_osd").
> 
> One explanation could be that mon_max_pg_per_osd does not, as the name suggests, mean max PGs/OSD, but instead max shards/OSD. If this is true, then multiplying my PGs by the number of shards (5+3 for me) would yield a value higher than 22000 once I add the shards for the new pool I wanted to create.

You have the idea. But remember that when using replication, PGs aren’t sharded; each replica still counts toward the per-OSD total.

> If this is true, then the chosen naming of mon_max_pg_per_osd would simply be misleading.

I wouldn’t say misleading, but perhaps *ambiguous*. With the default value, each OSD can in fact host 250 PGs (shards/replicas included), which is what “per” sort of connotes.
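
Put differently, the mon is projecting the average number of PG shards each OSD would have to hold once the new pool exists:

  22148 shards / 88 in OSDs ≈ 252 shards per OSD

which is just over the default of 250, so the check trips. It also explains why shrinking the metadata pools' pg_num earlier gave you enough headroom to create the first four EC pools.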

> 
> Thanks
> Rainer
> 
> Am 01.12.22 um 19:00 schrieb Eugen Block:
> 
>>> "got unexpected control message: TASK ERROR: error with 'osd pool create': mon_command failed -  pg_num 512 size 8 would mean 22148 total pgs, which exceeds max 22000 (mon_max_pg_per_osd 250 * num_in_osds 88)"
>>> 
>>> I also tried the direct way to create a new pool using:
>>> ceph osd pool create <pool> 512 128 erasure <profile> but the error message below remains.
>>> 
>>> What I do not understand are the behind-the-scenes calculations for the total PG number of 22148. How is this total number "22148" calculated?
>>> 
>>> I already reduced the number of PGs for the metadata pool of each EC pool, and in this way I was able to create 4 pools. But just for fun I now tried to create EC pool number 5, and I see the message from above again.
>>> 
>>> Here are the pools created by now (scraped from ceph osd pool autoscale-status):
>>> Pool:                Size:   Bias:  PG_NUM:
>>> rbd                  4599    1.0      32
>>> px-a-data          528.2G    1.0     512
>>> px-a-metadata      838.1k    1.0     128
>>> px-b-data              0     1.0     512
>>> px-b-metadata         19     1.0     128
>>> px-c-data              0     1.0     512
>>> px-c-metadata         19     1.0     128
>>> px-d-data              0     1.0     512
>>> px-d-metadata          0     1.0     128
>>> 
>>> So the total number of PGs for all pools is currently 2592, which is far from 22148 PGs?
>>> 
>>> Any ideas?
>>> Thanks Rainer
>>> -- 
>>> Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse  1
>>> 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
>>> PGP: http://www.uni-koblenz.de/~krienke/mypgp.html,     Fax: +49261287 1001312
> 
> -- 
> Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
> 56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312
> Web: http://userpages.uni-koblenz.de/~krienke
> PGP: http://userpages.uni-koblenz.de/~krienke/mypgp.html
> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



