Are you running on HDDs? The minimum allocation size for HDDs is 64 KB by
default. You can control that via the parameter bluestore_min_alloc_size
during OSD creation. 64 KB times 8 million files is 512 GB, which is the
amount of usable space you reported before running the test, so that seems
to add up. There's also some metadata overhead etc.

You might want to consider enabling inline data in CephFS to handle small
files in a space-efficient way (note that this feature is officially marked
as experimental, though):

http://docs.ceph.com/docs/master/cephfs/experimental-features/#inline-data


Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Fri, Mar 29, 2019 at 1:20 PM Clausen, Jörn <jclausen@xxxxxxxxx> wrote:
>
> Hi!
>
> In my ongoing quest to wrap my head around Ceph, I created a CephFS
> (data and metadata pools with replicated size 3, 128 PGs each). When I
> mount it on my test client, I see a usable space of ~500 GB, which I
> guess is okay for the raw capacity of 1.6 TiB I have in my OSDs.
>
> I ran bonnie with
>
> -s 0G -n 20480:1k:1:8192
>
> i.e. I should end up with ~20 million files, each file at most 1k in
> size. After about 8 million files (about 4.7 GB of actual use), my
> cluster runs out of space.
>
> Is there something like a "block size" in CephFS? I've read
>
> http://docs.ceph.com/docs/master/cephfs/file-layouts/
>
> and thought maybe object_size is something I can tune, but I only get
>
> $ setfattr -n ceph.dir.layout.object_size -v 524288 bonnie
> setfattr: bonnie: Invalid argument
>
> Is this even the right approach? Or are "CephFS" and "many small files"
> such opposing concepts that it is simply not worth the effort?
>
> --
> Jörn Clausen
> Daten- und Rechenzentrum
> GEOMAR Helmholtz-Zentrum für Ozeanforschung Kiel
> Düsternbrookerweg 20
> 24105 Kiel
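

PS, a few concrete bits. None of this is tested on my side, so please
double-check against the docs for your release.

If you want to experiment with a smaller allocation size, something along
these lines in ceph.conf before (re)creating the OSDs should do it. 4096 is
just an example value; the setting is only read when an OSD is created, so
existing OSDs would have to be redeployed, and smaller allocation sizes
come with more metadata overhead:

  [osd]
  bluestore_min_alloc_size_hdd = 4096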
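
Inline data is enabled per filesystem. As far as I remember the command is
along the lines of (substitute your filesystem's name for <fs_name>):

  ceph fs set <fs_name> inline_data true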
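
Regarding the setfattr error: if I remember correctly, object_size has to
be a multiple of stripe_unit (which defaults to 4 MiB), so lowering
object_size on its own is rejected with EINVAL. Lowering stripe_unit first,
e.g.

  setfattr -n ceph.dir.layout.stripe_unit -v 524288 bonnie
  setfattr -n ceph.dir.layout.object_size -v 524288 bonnie

should be accepted. Note that the file layout only controls how CephFS maps
files onto RADOS objects; it does not change BlueStore's minimum allocation
size, so it won't reduce the space usage you are seeing.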