Use k+m for the PG calculation; that value also shows up as "erasure size"
in "ceph osd pool ls detail". The important thing here is how many OSDs a
PG shows up on, and an EC PG shows up on all k+m OSDs.

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sun, Apr 28, 2019 at 9:41 AM Igor Podlesny <ceph-user@xxxxxxxx> wrote:
>
> For replicated pools (without rounding to the nearest power of two), the
> overall number of PGs is calculated as:
>
>     Pool_PGs = 100 * (OSDs / Pool_Size),
>
> where
>     100 -- the target number of PGs per OSD for that pool,
>     Pool_Size -- a factor showing how much raw storage is actually used
>                  to store one logical unit of data.
>
> By analogy, I suppose that for EC pools the corresponding Pool_Size
> could be calculated as:
>
>     Raw_Storage_Use / Logical_Storage_Use
>
> or, in EC terms, (k + m) / k. For EC (k=2, m=1) this gives:
>
>     Raw_Storage_Use = 3
>     Logical_Storage_Use = 2
>
> -- hence, Pool_Size should be 1.5.
>
> On the other hand, the Ceph documentation says about the same EC pool
> (underline is mine):
>
>     "It is equivalent to a replicated pool of size __two__ but
>     requires 1.5TB instead of 2TB to store 1TB of data"
>
> So how does Ceph calculate the PG distribution per OSD for it?
> Using (k + m) / k? Or just k? Or something else entirely?
>
> --
> End of message. Next message?
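
A minimal sketch of the arithmetic above, for illustration only (not part
of the original thread): it assumes a hypothetical 12-OSD cluster and a
target of 100 PGs per OSD, and takes Pool_Size to be the replica count for
replicated pools and k+m for EC pools, per Paul's answer. Rounding to the
nearest power of two would then be applied on top of the raw value.

    # Illustrative sketch -- hypothetical cluster of 12 OSDs and a target
    # of 100 PGs per OSD; neither value comes from the thread.
    def target_pgs(osds, pool_size, pgs_per_osd=100):
        """Raw per-pool PG target before rounding to a power of two."""
        return pgs_per_osd * osds / pool_size

    # Replicated pool, size 3: each PG occupies 3 OSDs.
    print(target_pgs(osds=12, pool_size=3))      # 400.0

    # EC pool with k=2, m=1: each PG occupies k+m = 3 OSDs, so k+m
    # (not (k+m)/k) is the size used for the PG calculation.
    print(target_pgs(osds=12, pool_size=2 + 1))  # 400.0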