Re: How does CEPH calculate PGs per OSD for erasure coded (EC) pools?

On Sun, 28 Apr 2019 at 16:14, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> Use k+m for PG calculation, that value also shows up as "erasure size"
> in ceph osd pool ls detail

So does that mean that, for PG calculation, these two pools are equivalent:

1) EC(4, 2)
2) replicated, size 6

? That sounds weird, to be honest. Replicated with size 6 means each
logical piece of data is stored 6 times, so what used to need a single
PG copy now needs 6. But with EC(4, 2) the overhead in terms of raw
occupied space is still only 1.5x -- so why does the PG calculation
need to use 6 instead of 1.5?
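
To spell out my confusion with some rough numbers -- this is just a
sketch of how I understand the PG-per-OSD math Paul is describing,
with made-up pg_num and OSD counts:

# Sketch of how I understand the PG-per-OSD math (made-up figures).
pg_num = 128     # hypothetical pg_num of the pool
num_osds = 12    # hypothetical number of OSDs backing the pool

def pgs_per_osd(pg_num, pool_size, num_osds):
    # Each PG is mapped to pool_size OSDs (the replica count, or k+m
    # for EC), so the average number of PG instances per OSD scales
    # with pool_size, not with the raw space overhead.
    return pg_num * pool_size / num_osds

# 1) replicated, size 6: pool size = 6, raw space overhead = 6x
print(pgs_per_osd(pg_num, 6, num_osds))        # 64.0

# 2) EC(4, 2): pool size = k + m = 6, raw overhead = (4 + 2) / 4 = 1.5x
print(pgs_per_osd(pg_num, 4 + 2, num_osds))    # 64.0

So both pools would put the same number of PG instances on each OSD
even though their raw space overheads differ by a factor of 4 -- that
is the part that looks weird to me.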

Also, why does the CEPH documentation say "It is equivalent to a
replicated pool of size __two__" when describing the EC(2, 1) example?
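
For reference, these are the numbers I'm comparing for that example
(my own back-of-the-envelope figures, not taken from the docs):

# EC(2, 1): k = 2 data chunks + m = 1 coding chunk per object
k, m = 2, 1
chunks_per_object = k + m            # 3 -- the value Paul says to use for PGs
raw_overhead_ec = (k + m) / k        # 1.5x raw space per logical byte
failures_tolerated_ec = m            # 1

# replicated pool, size 2
size = 2
copies_per_object = size             # 2
raw_overhead_repl = float(size)      # 2.0x raw space per logical byte
failures_tolerated_repl = size - 1   # 1

The two pools match on failures tolerated, but not on raw overhead
(1.5 vs 2) and not on k+m vs size (3 vs 2), so I'm not sure which of
these the documentation's "equivalent" refers to.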

-- 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


