Hi Igor,

Given the patch history and the rejection of the previous patch in favor of defaulting to a 4k block size, does this essentially mean Ceph does not support larger block sizes when using erasure coding? Will the Ceph project be updating its documentation and references to let everyone know that larger blocks don't interact with EC pools as intended?

Sincerely,

On Mon, Jul 20, 2020 at 9:06 AM Igor Fedotov <ifedotov@xxxxxxx> wrote:
> Hi Mateusz,
>
> I think you might be hit by:
>
> https://tracker.ceph.com/issues/44213
>
> This is fixed in the upcoming Pacific release. A Nautilus/Octopus backport
> is under discussion for now.
>
> Thanks,
>
> Igor
>
> On 7/18/2020 8:35 AM, Mateusz Skała wrote:
> > Hello Community,
> > I would like to ask for help explaining a situation.
> > There is a Rados gateway with an EC pool profile of k=6 m=4, so if I'm
> > correct it should use somewhere around 1.4-2.0x more than the raw data.
> > "rados df" shows me:
> > 116 TiB used and WR 26 TiB
> > Can you explain this? That is about 4.5x WR in used data. Why?
> > Regards,
> > Mateusz Skała
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx

--
Steven Pine
webair.com
P 516.938.4100 x
E steven.pine@xxxxxxxxxx
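
For anyone following the numbers: a k=6 m=4 profile nominally costs (k+m)/k = 10/6 ≈ 1.67x raw, yet the cluster shows 116/26 ≈ 4.5x. Below is a rough back-of-the-envelope sketch of how per-shard allocation rounding (the kind of behavior tracker #44213 concerns) can push usage well past the nominal ratio. The helper name, the 4 KiB stripe unit, and the 64 KiB min_alloc_size figure are my own illustrative assumptions, not values taken from this cluster.

```python
import math

KiB = 1024

def ec_on_disk(obj_size, k=6, m=4, stripe_unit=4 * KiB, min_alloc=64 * KiB):
    """Hypothetical model: bytes allocated for one object on an EC k+m pool
    when every shard write is rounded up to the allocator's min_alloc_size."""
    # The object is striped across k data shards, stripe_unit bytes per shard
    # per stripe; m extra parity shards are written alongside.
    stripe_width = k * stripe_unit
    stripes = max(1, math.ceil(obj_size / stripe_width))
    per_shard_logical = stripes * stripe_unit
    # Each of the k+m shards gets rounded up to min_alloc_size granularity.
    per_shard_disk = math.ceil(per_shard_logical / min_alloc) * min_alloc
    return (k + m) * per_shard_disk

nominal = (6 + 4) / 6                 # ~1.67x, the expected EC overhead
observed = 116 / 26                   # ~4.46x, from the rados df figures

small_obj = 24 * KiB                  # a small RGW object, as an example
amplification = ec_on_disk(small_obj) / small_obj
print(nominal, observed, amplification)
```

With these illustrative parameters a 24 KiB object occupies 10 shards of 64 KiB each (640 KiB total), a factor far above the nominal 1.67x, which is why a workload with many small objects can average out to something like the 4.5x observed.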