Hello,
As we know, with 64k for bluestore_min_alloc_size_hdd (I'm only using
HDDs), space gets wasted in certain conditions, especially with erasure
coding, when writing objects smaller than 64k x k (EC: k+m): every
object is split into k chunks, each written to a different OSD, and
each chunk is rounded up to 64k on disk.
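To put rough numbers on it, here's a quick back-of-the-envelope sketch
in Python. It assumes the simple model where each chunk (data and
coding alike) is rounded up to min_alloc_size independently; the k=4,
m=2 profile is just an example, not necessarily my actual pool:

# Rough estimate of the space allocated for one EC object under
# bluestore_min_alloc_size_hdd rounding. Simple model only: each of
# the k+m chunks is rounded up to min_alloc independently.

def allocated_bytes(obj_size, k, m, min_alloc=64 * 1024):
    chunk = -(-obj_size // k)                         # ceil(obj_size / k)
    chunk_alloc = -(-chunk // min_alloc) * min_alloc  # round chunk up
    return (k + m) * chunk_alloc

k, m = 4, 2   # example EC profile, plug in your own
for size_kib in (4, 16, 64, 256, 1024):
    size = size_kib * 1024
    alloc = allocated_bytes(size, k, m)
    ideal = size * (k + m) / k                        # EC overhead alone
    print(f"{size_kib:5} KiB object -> {alloc // 1024:5} KiB on disk "
          f"(EC alone would need {ideal / 1024:.0f} KiB, "
          f"x{alloc / ideal:.1f})")

In that model, anything below 64k x k (256 KiB with k=4) allocates the
full (k+m) x 64k no matter how small it is, which is exactly the waste
in question.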
My main use case is big (40TB) RBD images mounted as XFS filesystems on
Linux servers,
exposed to our backup software.
So, it's mainly big files.
My thought (but I'd like some other points of view) is that I could
deal with the amplification by using bigger block sizes on my XFS
filesystems, instead of reducing bluestore_min_alloc_size_hdd on all
OSDs.
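For what it's worth, the same arithmetic can compare the two knobs,
under the crude assumption that the XFS block size acts as a floor on
the write sizes that reach RADOS objects (this ignores RBD striping
and extent merging entirely, so take it as a sketch only):

# Crude comparison: overhead factor vs. write size, for the current
# 64k min_alloc and for a reduced 4k one. Assumes the FS block size
# acts as a floor on write sizes hitting RADOS objects.

def overhead(write_size, k, m, min_alloc):
    chunk = -(-write_size // k)
    alloc = (k + m) * (-(-chunk // min_alloc) * min_alloc)
    return alloc / (write_size * (k + m) / k)

k, m, KiB = 4, 2, 1024
print("write size | x overhead @64k | x overhead @4k")
for ws in (4 * KiB, 16 * KiB, 64 * KiB, 256 * KiB):
    print(f"{ws // KiB:7} KiB | {overhead(ws, k, m, 64 * KiB):12.1f} "
          f"| {overhead(ws, k, m, 4 * KiB):11.1f}")

Either way the break-even is writes of at least k x min_alloc, so
bigger filesystem blocks only help if they actually push typical write
sizes past that point.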
What do you think?