Hello, thank you for your response.
Erasure coding keeps getting better, and we really cannot afford the
storage overhead of 3x replication.
Anyway, as I understand it, the problem is also present with
replication, just less amplified (blocks are not divided between OSDs,
they are replicated in full).
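To put rough numbers on it (purely illustrative, assuming a 16 KiB
logical write and the 64 KiB minimum allocation size): with 3x
replication the object is stored whole on three OSDs, each copy rounded
up to one 64 KiB allocation, so 3 x 64 KiB = 192 KiB lands on disk for
16 KiB of data. With erasure coding the same write is first split into
k chunks and each chunk is rounded up separately, so the rounding loss
is multiplied by k+m (a worked example is in the quoted message below).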
On 2021-02-02 16:50, Steven Pine wrote:
You are unlikely to avoid the space amplification bug by using larger
block sizes. I honestly do not recommend using an EC pool; it is
generally less performant, and EC pools are not as well supported by
the Ceph development community.
On Tue, Feb 2, 2021 at 5:11 AM Gilles Mocellin
<gilles.mocellin@xxxxxxxxxxxxxx> wrote:
Hello,
As we know, with 64k for bluestore_min_alloc_size_hdd (I'm only using
HDDs), in certain conditions, especially with erasure coding, space is
wasted when writing objects smaller than 64k x k (EC: k+m).
Every object is divided into k chunks, each written to a different OSD.
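For example (purely illustrative, assuming an EC 4+2 profile): any
object smaller than 4 x 64k = 256k cannot fill the minimum allocation
on every OSD. A 16k object is split into four 4k data chunks plus two
parity chunks, and each of the six chunks is rounded up to a 64k
allocation on its OSD, so 6 x 64k = 384k of disk is used for 16k of
data.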
My main use case is big (40 TB) RBD images mounted as XFS filesystems
on Linux servers, exposed to our backup software.
So, it's mainly big files.
My thought, but I'd like some other points of view, is that I could
deal with the amplification by using bigger block sizes on my XFS
filesystems, instead of reducing bluestore_min_alloc_size_hdd on all
OSDs.
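For reference, a rough sketch of the two knobs involved (the values and
the device name are only examples, not a recommendation):

  mkfs.xfs -b size=65536 /dev/rbdX
      # larger XFS block size on the RBD image
  ceph config set osd bluestore_min_alloc_size_hdd 4096
      # only takes effect for OSDs (re)created afterwards

Note that, as far as I know, Linux will normally refuse to mount an XFS
filesystem whose block size is larger than the page size (usually
4 KiB), which may limit the first option.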
What do you think?
--
Steven Pine
E steven.pine@xxxxxxxxxx | P 516.938.4100 x
Webair | 501 Franklin Avenue Suite 200, Garden City NY, 11530
webair.com
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx