Re: EC profile datastore usage - question

Hi Steven,

IMO your statement about "not supporting higher block sizes" is too strong. In my experience, excessive space usage for EC pools tends to depend on the write access pattern, input block sizes, and/or object sizes. Hence I'm pretty sure this issue isn't present/visible in every cluster with an alloc size > 4K and an EC pool enabled. E.g., one wouldn't notice it when the majority of objects are large enough and/or written just once.
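
To give a rough idea of what I mean, here is a minimal back-of-the-envelope sketch (plain Python, not Ceph code) of how a single write-once object ends up on disk in a k=6/m=4 pool when every shard allocation is rounded up to BlueStore's min_alloc_size. The 64 KiB value below is just an assumed example (the old HDD default), and the per-shard rounding model is a simplification as well:

import math

def ec_write_once_footprint(object_size, k=6, m=4, min_alloc_size=64 * 1024):
    """Return (raw bytes allocated across all shards, amplification factor)."""
    shard_size = math.ceil(object_size / k)                # data stored per shard
    shard_alloc = math.ceil(shard_size / min_alloc_size) * min_alloc_size
    raw = (k + m) * shard_alloc                            # data + parity shards
    return raw, raw / object_size

# Small objects pay the rounding cost on all k+m shards, while large
# write-once objects converge to the ideal (k+m)/k = 1.67x overhead.
for size in (4 * 1024, 64 * 1024, 4 * 1024 * 1024):
    raw, amp = ec_write_once_footprint(size)
    print(f"{size // 1024:>5} KiB object -> {raw // 1024:>6} KiB raw ({amp:.2f}x)")

A 4 KiB object still allocates 10 * 64 KiB = 640 KiB in this model, while a 4 MiB object lands at roughly 1.7x, which is why clusters holding mostly large, write-once objects may never notice the problem.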

But honestly, I haven't performed a comprehensive investigation of the topic.

And perhaps you're right that we should cover the issue in the documentation. Please feel free to file a corresponding ticket in the Ceph tracker...


Thanks,

Igor


On 7/20/2020 7:02 PM, Steven Pine wrote:
Hi Igor,

Given the patch history and the rejection of the previous patch in favor of the one that defaults to a 4k block size, does this essentially mean Ceph does not support higher block sizes when using erasure coding? Will the Ceph project be updating its documentation and references to let everyone know that larger block sizes don't interact with EC pools as intended?

Sincerely,

On Mon, Jul 20, 2020 at 9:06 AM Igor Fedotov <ifedotov@xxxxxxx> wrote:

    Hi Mateusz,

    I think you might be hit by:

    https://tracker.ceph.com/issues/44213


    This is fixed in the upcoming Pacific release. The Nautilus/Octopus
    backport is under discussion for now.
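
    As a quick sanity check (just back-of-the-envelope arithmetic, using the
    numbers from your message quoted below), the observed overhead is far
    above the ideal (k+m)/k ratio for k=6 m=4, which is exactly the kind of
    gap that ticket is about:

    k, m = 6, 4
    ideal = (k + m) / k          # ~1.67x for k=6 m=4
    observed = 116 / 26          # used TiB / written TiB, ~4.46x
    print(f"ideal {ideal:.2f}x, observed {observed:.2f}x, "
          f"extra amplification {observed / ideal:.2f}x")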


    Thanks,

    Igor

    On 7/18/2020 8:35 AM, Mateusz Skała wrote:
    > Hello Community,
    > I would like to ask for help in explaining a situation.
    > There is a Rados gateway with an EC pool profile of k=6 m=4, so it
    > should use roughly 1.4 - 2.0 times the raw data, if I'm correct.
    > rados df shows me:
    > 116 TiB used and WR 26 TiB
    > Can you explain this? That is about 4.5*WR of used data. Why?
    > Regards
    > Mateusz Skała



--
Steven Pine
webair.com

P 516.938.4100 x
E steven.pine@xxxxxxxxxx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



