Re: proxmox hyperconverged pg calculations in ceph pacific, pve 7.2

Hi Rainer,

there is indeed a bit of a mess in the terminology. The setting
mon_max_pg_per_osd means "the maximum number of PGs an OSD is a member
of", which is the same as "the number of PG shards an OSD holds".
Unfortunately, this confusion is endemic throughout the documentation,
and one needs to look very hard at the context to see whether a PG or
a PG shard (= PG membership) is meant.
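
To make the distinction concrete, here is a rough sketch in Python
(not Ceph code; the 5+3 EC profile is taken from Rainer's setup
further down):

    # One PG in an EC pool with k=5, m=3 has 8 shards, each stored on a
    # different OSD, so it counts as 8 PG memberships cluster-wide.
    pg_num = 512                         # PGs in the pool
    pool_size = 5 + 3                    # k + m chunks per PG
    memberships = pg_num * pool_size     # 4096 OSD<->PG memberships
    num_in_osds = 88
    print(memberships / num_in_osds)     # ~46.5 memberships per OSD from this one pool

It is these memberships per OSD, not bare PGs, that mon_max_pg_per_osd
limits.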

A similar confusion exists around the term "[ceph] user", which
sometimes means an unprivileged end user and sometimes a privileged
storage admin. I have had serious security discussions over here
because of this confusion.

Both dual-uses are legacy and very hard to clean up in the docs.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Rainer Krienke <krienke@xxxxxxxxxxxxxx>
Sent: 02 December 2022 12:44:26
To: ceph-users@xxxxxxx
Subject:  Re: proxmox hyperconverged pg calculations in ceph pacific, pve 7.2

Hello,

I still do not really understand why this error message comes up.
The error message contains two significant numbers. The first one,
which is easy to understand, is the maximum number of PGs per OSD, a
precompiled config variable (mon_max_pg_per_osd). Its value on my
cluster is 250. Multiplying it by the total number of OSDs (88) gives
a maximum of 22000 PGs for the whole cluster (based on
mon_max_pg_per_osd).
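
Spelled out (numbers straight from the error message):

    mon_max_pg_per_osd = 250
    num_in_osds = 88
    cluster_limit = mon_max_pg_per_osd * num_in_osds   # = 22000, the cap quoted in the error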

As said, the whole cluster only has 2592 PGs at the moment, and I
wanted to add another 512+128 PGs.

Then there is the second value, "22148". It can only mean PGs, since
this value is being compared to the cluster-wide maximum number of PGs
(based on "mon_max_pg_per_osd").

One explanation could be that mon_max_pg_per_osd does not, as the
name suggests, mean max PGs/OSD, but instead max shards/OSD. If this
is true, then multiplying my PGs by the number of shards (5+3 in my
case) and adding the shards for the new pool I wanted to create gives
a value higher than 22000.
If this is true, then the chosen name mon_max_pg_per_osd would simply
be misleading.
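
If that interpretation is right, the check can be reconstructed
roughly like this (a sketch only; the pool sizes are my guesses:
size 8 for the EC data pools, size 3 for rbd and the metadata pools,
so the result lands close to, but not exactly on, the reported 22148):

    # Hedged reconstruction: count PG "shards"/replicas (pg_num * pool size)
    # instead of bare PGs and compare against mon_max_pg_per_osd * num_in_osds.
    pools = {
        "rbd":           (32, 3),
        "px-a-data":     (512, 8), "px-a-metadata": (128, 3),
        "px-b-data":     (512, 8), "px-b-metadata": (128, 3),
        "px-c-data":     (512, 8), "px-c-metadata": (128, 3),
        "px-d-data":     (512, 8), "px-d-metadata": (128, 3),
    }
    existing = sum(pg * size for pg, size in pools.values())   # 18016 with these guessed sizes
    projected = existing + 512 * 8                              # adding the new EC pool: +4096
    limit = 250 * 88                                            # 22000
    print(projected, limit)   # 22112 > 22000 -> the pool creation is refused

The small gap to 22148 would then come from pools or sizes I am
guessing wrong, but the shard interpretation at least explains why
2592 "plain" PGs can trip a 22000 limit.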

Thanks
Rainer

On 01.12.22 at 19:00, Eugen Block wrote:

>> "got unexpected control message: TASK ERROR: error with 'osd pool
>> create': mon_command failed -  pg_num 512 size 8 would mean 22148
>> total pgs, which exceeds max 22000 (mon_max_pg_per_osd 250 *
>> num_in_osds 88)"
>>
>> I also tried the direct way to create a new pool using:
>> ceph osd pool create <pool> 512 128 erasure <profile> but the error
>> message below remains.
>>
>> What I do not understand are the calculations behind the scenes
>> that lead to the total PG number of 22148. How is this total of
>> "22148" calculated?
>>
>> I already reduced the number of pgs for the metadata pool of each
>> ec-pool and so I was able to create 4 pools in this way. But just for
>> fun I now tried to create ec-pool number 5 and I see the message from
>> above again.
>>
>> Here are the pools created by now (scraped from ceph osd pool
>> autoscale-status):
>> Pool:                Size:   Bias:  PG_NUM:
>> rbd                  4599    1.0      32
>> px-a-data          528.2G    1.0     512
>> px-a-metadata      838.1k    1.0     128
>> px-b-data              0     1.0     512
>> px-b-metadata         19     1.0     128
>> px-c-data              0     1.0     512
>> px-c-metadata         19     1.0     128
>> px-d-data              0     1.0     512
>> px-d-metadata          0     1.0     128
>>
>> So the total number of PGs for all pools is currently 2592, which is
>> far below 22148 PGs?
>>
>> Any ideas?
>> Thanks Rainer
>> --
>> Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse  1
>> 56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287
>> 1312
>> PGP: http://www.uni-koblenz.de/~krienke/mypgp.html,     Fax: +49261287
>> 1001312

--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Tel: +49261287 1312 Fax +49261287 100 1312
Web: http://userpages.uni-koblenz.de/~krienke
PGP: http://userpages.uni-koblenz.de/~krienke/mypgp.html

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx