Re: CephFS and many small files

Hi Paul!

Thanks for your answer. Yep, bluestore_min_alloc_size and your calculation sound very reasonable to me :)

On 29.03.2019 at 23:56, Paul Emmerich wrote:
> Are you running on HDDs? The minimum allocation size is 64 KB by
> default here. You can control that via the parameter
> bluestore_min_alloc_size during OSD creation.
> 64 KB times 8 million files is 512 GB, which is the amount of usable
> space you reported before running the test, so that seems to add up.

My test cluster is virtualized on vSphere, but the OSDs are reported as HDDs, and our production cluster uses HDDs only. All OSDs use the default value for bluestore_min_alloc_size.
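
To make the overhead tangible, here is a small back-of-the-envelope sketch in Python; the file count matches the test, but the average file size is only an illustrative assumption on my side:

    # Rough estimate of BlueStore space usage when every object is padded
    # up to the minimum allocation size (illustrative numbers only).
    def allocated_bytes(file_size, min_alloc_size):
        # Each object occupies a whole number of allocation units.
        units = -(-file_size // min_alloc_size)  # ceiling division
        return units * min_alloc_size

    num_files = 8_000_000      # file count from the test
    avg_file_size = 4 * 1024   # assumed: small files of roughly 4 KB each

    for min_alloc in (64 * 1024, 4 * 1024):  # HDD default vs. a smaller value
        total = num_files * allocated_bytes(avg_file_size, min_alloc)
        print(f"min_alloc_size = {min_alloc // 1024:>2} KB -> "
              f"about {total / 1024**3:.0f} GiB allocated")

With 64 KB this lands in the ballpark of the ~512 GB you calculated; with 4 KB the same files would only allocate around 31 GiB.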

If we really should consider tinkering with bluestore_min_alloc_size: as this setting is probably not tunable after OSD creation, we would have to recreate all OSDs in a rolling fashion. Should we expect any problems while OSDs with mixed min_alloc_size values coexist in the cluster?

> There's also some metadata overhead etc. You might want to consider
> enabling inline data in CephFS to handle small files in a
> store-efficient way (note that this feature is officially marked as
> experimental, though).
> http://docs.ceph.com/docs/master/cephfs/experimental-features/#inline-data

I'll give it a try on my test cluster.
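
Before enabling it, it is probably worth checking how many of our files would actually fit under the inline size limit. A quick sketch (the 4 KB threshold and the mount point are only assumptions on my side; the real inline limit should be verified for the running release):

    import os

    # Count files below an assumed inline-data threshold on a CephFS mount.
    INLINE_LIMIT = 4 * 1024    # assumption; verify the actual limit for your release
    ROOT = "/path/to/cephfs"   # hypothetical mount point

    small = total = 0
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            try:
                size = os.stat(os.path.join(dirpath, name)).st_size
            except OSError:
                continue  # file vanished or is unreadable; skip it
            total += 1
            if size <= INLINE_LIMIT:
                small += 1

    if total:
        print(f"{small} of {total} files ({100 * small / total:.1f}%) "
              f"are <= {INLINE_LIMIT} bytes")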

--
Jörn Clausen
Daten- und Rechenzentrum
GEOMAR Helmholtz-Zentrum für Ozeanforschung Kiel
Düsternbrookerweg 20
24105 Kiel


