Re: proxmox hyperconverged pg calculations in ceph pacific, pve 7.2

Hi,

you need to take the number of replicas into account as well. With 88 OSDs and the default mon_max_pg_per_osd of 250 you get the mentioned limit of 22000 PGs (replicas included): 88 x 250 = 22000. With EC pools each chunk counts as one replica, so a data pool with pg_num 512 and k=5, m=3 (size 8) alone contributes 512 x 8 = 4096 PG replicas. You should consider shrinking your pools, or let the autoscaler do that for you.
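
As far as I know the monitor projects the sum of pg_num x size over all pools (including the one being created) and rejects the creation if that exceeds mon_max_pg_per_osd x num_in_osds. As a rough illustration, assuming the replicated pools use the default size 3 and ignoring internal pools such as device_health_metrics, you can check the numbers yourself:

ceph osd pool ls detail                  # pg_num and size (k+m for EC) of every pool
ceph config get mon mon_max_pg_per_osd   # 250 by default
ceph osd stat                            # number of "in" OSDs, 88 in your case
echo $(( 250 * 88 ))                     # -> 22000, the enforced limit
# replica-weighted sum of the pools from your autoscale-status output:
echo $(( 32*3 + 4*(512*8) + 4*(128*3) )) # -> 18016
# so one more EC data pool (512 x 8 = 4096) already pushes the projected
# total past 22000, which is why the creation is rejected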

Regards
Eugen

Quoting Rainer Krienke <krienke@xxxxxxxxxxxxxx>:

Hello,

I run a hyperconverged PVE cluster (V7.2) with 11 nodes. Each node has 8 x 4 TB disks. PVE and Ceph are installed and running.

I wanted to create some Ceph pools with 512 PGs each. Since I want to use erasure coding (5+3), creating a pool actually creates two pools: a replicated rbd pool for metadata and the EC data pool. I used this pveceph command:

pveceph pool create px-e --erasure-coding k=5,m=3 --pg_autoscale_mode off --pg_num 512 --pg_num_min 128
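
As far as I understand, this roughly corresponds to the following plain Ceph commands (the profile name is made up; the actual pg_num values depend on what pveceph chooses or what I pass), which is why every call contributes two pools with their own pg_num:

ceph osd erasure-code-profile set px-e-ec-profile k=5 m=3
ceph osd pool create px-e-data 512 512 erasure px-e-ec-profile   # EC data pool, size = k+m = 8
ceph osd pool create px-e-metadata 128 128                       # replicated metadata pool (size 3)
ceph osd pool set px-e-data allow_ec_overwrites true             # RBD needs overwrites on EC pools
ceph osd pool application enable px-e-data rbd
ceph osd pool application enable px-e-metadata rbd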

I was able to create two pools in this way, but the third pveceph call threw this error:

"got unexpected control message: TASK ERROR: error with 'osd pool create': mon_command failed - pg_num 512 size 8 would mean 22148 total pgs, which exceeds max 22000 (mon_max_pg_per_osd 250 * num_in_osds 88)"

I also tried the direct way to create a new pool using:
ceph osd pool create <pool> 512 128 erasure <profile>
but the same error message as above remains.

What I do not understand are the calculations behind the scenes: how is this total number of 22148 PGs calculated?

I already reduced the number of PGs for the metadata pool of each EC pool, and this way I was able to create 4 pools. But just for fun I then tried to create EC pool number 5 and I see the message from above again.

Here are the pools created so far (scraped from ceph osd pool autoscale-status):
Pool:                Size:   Bias:  PG_NUM:
rbd                  4599    1.0      32
px-a-data          528.2G    1.0     512
px-a-metadata      838.1k    1.0     128
px-b-data              0     1.0     512
px-b-metadata         19     1.0     128
px-c-data              0     1.0     512
px-c-metadata         19     1.0     128
px-d-data              0     1.0     512
px-d-metadata          0     1.0     128

So the total number of PGs for all pools is currently 2592, which is far from the 22148 PGs in the error message?
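
(Counting plain PGs without replicas, my arithmetic from the table above is:)

echo $(( 32 + 4*512 + 4*128 ))   # -> 2592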

Any ideas?
Thanks Rainer
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse  1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


