Re: CephFS and many small files

Hello,

I haven't had any issues either with a 4 KB allocation size in a cluster holding 358M objects for 116 TB (237 TB raw) and 2.264B chunks/replicas.
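For context on why the allocation size matters for small files: each object's data is rounded up to a whole number of allocation units, so small objects waste proportionally more space at larger unit sizes. A minimal sketch (the 5,000-byte file below is a made-up illustration, not a figure from our cluster):

```python
import math

def on_disk_bytes(logical_size: int, min_alloc: int) -> int:
    """Space consumed when data is rounded up to whole allocation units."""
    return math.ceil(logical_size / min_alloc) * min_alloc

# A hypothetical 5,000-byte file under different allocation sizes:
print(on_disk_bytes(5000, 4096))   # 4 KB units  -> 8192 bytes on disk
print(on_disk_bytes(5000, 65536))  # 64 KB units -> 65536 bytes on disk
```

With mostly-small files, the 64 KB case ties up roughly 13x the space of the 4 KB case for that file, which is why a lower min_alloc_size pays off on small-file workloads.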

That is an average of 324 KB per object and 12.6M chunks/replicas per OSD, with RocksDB sizes ranging from 12.1 GB to 21.14 GB depending on how many PGs each OSD holds. RocksDB sizes will shrink as we add more OSDs to the cluster by the end of this year.
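Those averages follow directly from the totals above; a quick sanity check (decimal units assumed):

```python
objects = 358e6          # objects in the cluster
data_bytes = 116e12      # 116 TB of data (decimal TB assumed)
chunks = 2.264e9         # total chunks/replicas
chunks_per_osd = 12.6e6  # stated average per OSD

avg_object_kb = data_bytes / objects / 1000
implied_osds = chunks / chunks_per_osd

print(round(avg_object_kb))  # ~324 KB per object, matching the figure above
print(round(implied_osds))   # ~180 OSDs implied by the per-OSD average
```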

We've seen a huge latency improvement since moving OSDs to BlueStore. FileStore (XFS) would no longer operate well with over 10M files, even with a negligible fragmentation factor and 8/40 split/merge thresholds.
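For reference, FileStore splits a PG's subdirectories once they exceed 16 x filestore_split_multiple x |filestore_merge_threshold| files. Assuming the 8/40 above maps onto those two options (the product, and therefore the split point, is the same either way round):

```python
def filestore_split_point(split_multiple: int, merge_threshold: int) -> int:
    """Files per subdirectory at which FileStore triggers a directory split."""
    return 16 * split_multiple * abs(merge_threshold)

print(filestore_split_point(8, 40))  # 5120 files per subdirectory
```

So even with these raised thresholds, tens of millions of files mean frequent directory splits, which is consistent with FileStore struggling past the 10M-file mark.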

Frédéric.

On 01/04/2019 at 14:47, Sergey Malinin wrote:
I haven't had any issues with a 4 KB allocation size in a cluster holding 189M files.

April 1, 2019 2:04 PM, "Paul Emmerich" <paul.emmerich@xxxxxxxx> wrote:

I'm not sure about the real-world impact of a lower min alloc size or
the rationale behind the default values for HDDs (64 KB) and SSDs (16 KB).

Paul
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



