How does CEPH calculate PGs per OSD for erasure-coded (EC) pools?

For replicated pools (without rounding to the nearest power of two),
the overall number of PGs for a pool is calculated as:

    Pool_PGs = 100 * (OSDs / Pool_Size),

where
    100 -- the target number of PGs per single OSD for that pool,
    Pool_Size -- a factor showing how much raw storage is actually
used to store one logical unit of data.
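
For clarity, here is that formula as a tiny Python sketch (the
function and parameter names are mine, just for illustration):

    def pool_pgs(num_osds: int, pool_size: float,
                 target_pgs_per_osd: int = 100) -> float:
        """Overall PG count for one pool, before rounding to a power of two."""
        return target_pgs_per_osd * num_osds / pool_size

    # Example: 12 OSDs, replicated pool with size 3
    print(pool_pgs(12, 3))  # 400.0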

By analogy, I would assume that for EC pools the corresponding
Pool_Size can be calculated as:

    Raw_Storage_Use / Logical_Storage_Use

or, in EC terms, (k + m) / k. For an EC profile with k=2, m=1 this gives:

    Raw_Storage_Use = 3
    Logical_Storage_Use = 2

-- hence Pool_Size should be 1.5.
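
In Python terms, the Pool_Size I am assuming for an EC profile would
simply be (again, the function name is mine):

    def ec_pool_size(k: int, m: int) -> float:
        """Raw_Storage_Use / Logical_Storage_Use for an EC(k, m) profile."""
        return (k + m) / k

    print(ec_pool_size(2, 1))  # 1.5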

On the other hand, the CEPH documentation says the following about the
same EC pool (emphasis mine):

    "It is equivalent to a replicated pool of size __two__ but
    requires 1.5TB instead of 2TB to store 1TB of data"

So how does CEPH calculate the PG distribution per OSD for such a pool?
Using (k + m) / k? Just k? Or something else entirely?
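
To show how much the answer matters, here is a small sketch comparing
the three possible divisors for a hypothetical 12-OSD cluster with EC
k=2, m=1; it only illustrates the resulting PG counts and does not
claim which interpretation CEPH actually uses:

    OSDS, K, M, TARGET = 12, 2, 1, 100

    # Candidate values of Pool_Size for the EC(2, 1) pool in question.
    for label, divisor in [("(k + m) / k", (K + M) / K),
                           ("k",           K),
                           ("k + m",       K + M)]:
        print(f"divisor {label:11} -> {TARGET * OSDS / divisor:.0f} PGs")

    # divisor (k + m) / k -> 800 PGs
    # divisor k           -> 600 PGs
    # divisor k + m       -> 400 PGs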

-- 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
