Re: Ceph EC PG calculation

On Wed, 18 Nov 2020 at 04:59, Szabo, Istvan (Agoda) <
Istvan.Szabo@xxxxxxxxx> wrote:

> I have this error:
> I have 36 osd and get this:
> Error ERANGE:  pg_num 4096 size 6 would mean 25011 total pgs, which
> exceeds max 10500 (mon_max_pg_per_osd 250 * num_in_osds 42)
> I have 4:2 data EC pool, and the others are replicated.
>


> pool 12 'sin.rgw.buckets.data' erasure profile data-ec size 6 min_size 5
> crush_rule 3 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn
> last_change 604 flags hashpspool,ec_overwrites stripe_width 16384
> application rgw
>

Don't forget that EC 4+2 means every PG in the pool becomes 4+2 = 6 PG
shards out on the OSDs, so 4096 * 6 is ~25k. If you aim for something like
the recommended 100-200 PGs per OSD, you should sum the PGs across ALL
pools (taking replication factor and K+M size into account!) and try to
end up at 3600-7200 as the total for your 36 OSDs.

Setting pg_num 4096 on a size-6 pool would by itself go way past that
limit, regardless of the other pools.
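The arithmetic can be sketched like this (a rough illustration using the
numbers from this thread; the helper names are mine, not anything from
Ceph itself):

```python
# Rough sketch of the PG-per-OSD arithmetic behind the mon's ERANGE check.
# Helper names are illustrative, not part of any Ceph API.

def pg_shards(pg_num, size):
    """Each PG places `size` shards on OSDs: replica count for
    replicated pools, K+M for erasure-coded pools."""
    return pg_num * size

def pgs_per_osd(total_shards, num_osds):
    """Average PG shards landing on each OSD."""
    return total_shards / num_osds

# 4+2 EC pool at pg_num 4096 -> size 6
shards = pg_shards(4096, 6)
print(shards)                  # 24576, the ~25k from the error message
print(pgs_per_osd(shards, 36)) # ~683 per OSD, far above the 100-200 target
```

With 36 OSDs and a 100-200 PGs-per-OSD target, the budget for all pools
combined is 3600-7200 shards, which this one pool alone would blow through.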

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


