How to handle bluestore fragmentation

In the thread "s3 requires twice the space it should use", Boris pointed out
that the fragmentation rating of his OSDs is around 0.8-0.9:


> On Thu, Apr 15, 2021 at 8:06 PM Boris Behrens <bb@xxxxxxxxx> wrote:
>> I also checked the fragmentation on the bluestore OSDs and it is around
>> 0.80 - 0.89 on most OSDs. yikes.
>> [root@s3db1 ~]# ceph daemon osd.23 bluestore allocator score block
>> {
>>     "fragmentation_rating": 0.85906054329923576
>> }


That made me wonder: what is the currently recommended (and not recommended)
way to handle and reduce fragmentation on existing OSDs?
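
For reference, a quick way to collect the same score from every OSD on a host
is a small loop over the admin sockets. This is only a sketch; it assumes the
default socket path under /var/run/ceph and that it is run on the OSD host
itself:

  for sock in /var/run/ceph/ceph-osd.*.asok; do
      echo "== $sock =="
      ceph daemon "$sock" bluestore allocator score block
  done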

Reading around, the main option I found is to tweak
bluestore_min_alloc_size_{ssd,hdd} and redeploy those OSDs, but I was unable
to find much else. What do people do in practice?
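
For what it's worth, my understanding is that min_alloc_size is baked into an
OSD at mkfs time, so changing the setting only affects OSDs created
afterwards, hence the redeploy. Something along these lines (osd.23 and the
4 KiB value are purely illustrative):

  # shows the current config value, i.e. what a newly created OSD would use;
  # an existing OSD keeps whatever value it was created with
  ceph daemon osd.23 config get bluestore_min_alloc_size_hdd

  # change the default for OSDs created from now on, then redeploy them
  ceph config set osd bluestore_min_alloc_size_hdd 4096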


ps. There was another thread asking something similar (and a bunch of other
things) that got no replies:
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/3PITWZRNX7RFRQNG33VSNKYGOO2IFMZG/
