CephFS and many small files

Hi!

In my ongoing quest to wrap my head around Ceph, I created a CephFS (data and metadata pools, both replicated size 3 with 128 PGs each). When I mount it on my test client, I see ~500 GB of usable space, which seems about right for the 1.6 TiB of raw capacity in my OSDs (1.6 TiB / 3 replicas ≈ 530 GiB).
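
For completeness, this is roughly how I set it up (from memory; the pool and filesystem names are just what I picked, and replicated size 3 is the default):

$ ceph osd pool create cephfs_data 128
$ ceph osd pool create cephfs_metadata 128
$ ceph fs new cephfs cephfs_metadata cephfs_data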

I run bonnie++ with

bonnie++ -s 0G -n 20480:1k:1:8192

i.e. I should end up with roughly 20 million files (20480 × 1024), each at most 1 KB in size, spread over 8192 directories. After about 8 million files (about 4.7 GB of actual data), my cluster runs out of space.
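
One suspicion I have: if the OSDs use BlueStore on spinning disks, the default bluestore_min_alloc_size_hdd is 64 KiB (if I read the docs correctly), so even a 1 KB file would occupy a full 64 KiB allocation per replica:

8,000,000 files × 64 KiB × 3 replicas ≈ 1.43 TiB

which is suspiciously close to my 1.6 TiB of raw capacity. But I may be misreading how allocation works.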

Is there something like a "block size" in CephFS? I've read

http://docs.ceph.com/docs/master/cephfs/file-layouts/

and thought object_size might be something I could tune, but I only get:

$ setfattr -n ceph.dir.layout.object_size -v 524288 bonnie
setfattr: bonnie: Invalid argument
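
Re-reading the layout docs, I wonder whether the EINVAL is simply because object_size must be a multiple of stripe_unit, which defaults to 4 MiB. Would setting stripe_unit first (to the same 512 KiB) get around this? Something like:

$ setfattr -n ceph.dir.layout.stripe_unit -v 524288 bonnie
$ setfattr -n ceph.dir.layout.object_size -v 524288 bonnie

(If I understand the docs correctly, this would in any case only affect files created in the directory afterwards.)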

Is this even the right approach? Or are "CephFS" and "many small files" such opposing concepts that it is simply not worth the effort?

--
Jörn Clausen
Daten- und Rechenzentrum
GEOMAR Helmholtz-Zentrum für Ozeanforschung Kiel
Düsternbrookerweg 20
24105 Kiel



