Re: XFS block size on RBD / EC vs space amplification


 



Hello, thanks,

I've seen that.
But is it the only solution? Do I have alternatives for my use case, such as forcing the use of bigger blocks on the client side?

I mentioned XFS (4k block size), but perhaps this could be done directly in krbd, since the block device appears to be exposed as a drive with 512B sectors.

But I don't really know how to interpret this:

sudo lsblk -o PHY-SEC,MIN-IO,OPT-IO /dev/rbd0
PHY-SEC MIN-IO OPT-IO
    512  65536  65536
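For what it's worth, a rough way to cross-check those values (the pool/image name below is only a placeholder):

sudo blockdev --getss --getpbsz --getiomin --getioopt /dev/rbd0
rbd info rbd/backup-image

blockdev reports the logical/physical sector sizes and the minimum/optimal I/O sizes the kernel advertises for the device, and rbd info shows the image's object size and striping.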

On 2021-02-03 09:16, Konstantin Shalygin wrote:
Actually, with Igor's latest patches the default min alloc size for HDD is 4K
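(For reference, a quick way to check what a cluster is configured with; note that for an already-created OSD the effective allocation size was fixed at mkfs time and may differ from the current config:

ceph config get osd bluestore_min_alloc_size_hdd
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
)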



k

Sent from my iPhone

On 2 Feb 2021, at 13:12, Gilles Mocellin <gilles.mocellin@xxxxxxxxxxxxxx> wrote:

Hello,

As we know, with 64k for bluestore_min_alloc_size_hdd (I'm only using HDDs),
in certain conditions, especially with erasure coding,
there is wasted space when writing objects smaller than 64k x k (EC: k+m).

Every object is divided into k chunks, each written to a different OSD.
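As a rough illustration (assuming EC 4+2 and the 64k default): a 128k object is split into four 32k data chunks plus two 32k parity chunks, each chunk gets rounded up to 64k on its OSD, so 6 x 64k = 384k is allocated where the ideal 1.5x EC overhead would be 192k, roughly doubling the space used. Only writes large enough to keep each chunk a multiple of 64k (at least k x 64k = 256k of data per stripe) avoid that padding.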

My main use case is big (40TB) RBD images mounted as XFS filesystems on Linux servers,
exposed to our backup software.
So, it's mainly big files.

My thought, though I'd like other points of view, is that I could deal with the amplification by using bigger block sizes on my XFS filesystems,
instead of reducing bluestore_min_alloc_size_hdd on all OSDs.
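For reference, a minimal sketch of that alternative path, just to frame the trade-off (as far as I know the option is only read when an OSD is created, so existing OSDs would have to be redeployed to pick it up):

ceph config set osd bluestore_min_alloc_size_hdd 4096
# then redeploy each HDD OSD so the new allocation size actually takes effect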

What do you think?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



