Re: cephfs small files expansion

Yes. RBD volumes don't show this amplification because an RBD image is striped into large objects (4 MiB by default), each far bigger than bluestore_min_alloc_size, so the allocation rounding is negligible. Your client filesystem is built *within* the RBD volume, but to Ceph it's a single, monolithic image; small files inside it never become small RADOS objects. CephFS, by contrast, stores each file as its own RADOS object, so every small file gets padded up to min_alloc_size.
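
For intuition, here's a rough back-of-the-envelope sketch of the effect (illustrative only; it assumes RBD's default 4 MiB object size and the pre-Pacific HDD default bluestore_min_alloc_size of 64 KiB, and ignores replication):

    # Back-of-the-envelope estimate of BlueStore allocation rounding.
    # Assumptions (not from this thread): RBD stripes an image into
    # 4 MiB objects by default; pre-Pacific HDD OSDs default to a
    # 64 KiB bluestore_min_alloc_size; each RADOS object's on-disk
    # footprint is rounded up to a multiple of min_alloc_size.
    import math

    MIN_ALLOC = 64 * 1024         # pre-Pacific HDD default (64 KiB)
    RBD_OBJECT = 4 * 1024 * 1024  # default RBD object size (4 MiB)

    def on_disk(size: int, min_alloc: int = MIN_ALLOC) -> int:
        """On-disk footprint of one object after allocation rounding."""
        return math.ceil(size / min_alloc) * min_alloc

    # CephFS: each small file is its own RADOS object.
    small_file = 4 * 1024  # a 4 KiB file
    print(on_disk(small_file) / small_file)    # 16.0 -> 16x amplification

    # RBD: the same data sits inside large image objects, where the
    # rounding is negligible.
    print(on_disk(RBD_OBJECT) / RBD_OBJECT)    # 1.0 -> no amplification

So 100,000 such 4 KiB files would consume roughly 6 GiB of raw USED for about 400 MiB of actual data, before replication, which matches the "really abnormal" USED figure described below.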

> On Sep 14, 2021, at 7:27 AM, Sebastien Feminier <sebastien.feminier@xxxxxxxxxxxxxxx> wrote:
> 
> 
> Thanks Josh. My cluster is Octopus on HDD (for testing), so I have to set bluestore_min_alloc_size first and then re-create the OSDs?
> Is it normal that my RBD pool does not show size amplification?
> 
>> Hey Seb,
>> 
>>> I have a test cluster (Octopus) on which I created RBD and CephFS pools. When
>>> I copy a directory containing many small files to the RBD pool, the USED column
>>> of ceph df looks normal; on CephFS, however, USED seems really abnormal. I
>>> tried changing bluestore_min_alloc_size, but it didn't change anything. Would
>>> the solution be to re-create the pool, or outright the OSDs?
>> 
>> bluestore_min_alloc_size takes effect only at OSD creation time; if
>> you changed it after creating the OSDs, it has had no effect yet. If
>> your pool is on HDDs and this is pre-Pacific, the default of 64 KiB
>> will cause huge amplification for small objects.
>> 
>> Josh

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



