Re: inline_data (was: CephFS and many small files)

On Tue, Apr 2, 2019 at 5:24 AM Clausen, Jörn <jclausen@xxxxxxxxx> wrote:
>
> Hi!
>
> Am 29.03.2019 um 23:56 schrieb Paul Emmerich:
> > There's also some metadata overhead etc. You might want to consider
> > enabling inline data in cephfs to handle small files in a
> > storage-efficient way (note that this feature is officially marked as
> > experimental, though).
> > http://docs.ceph.com/docs/master/cephfs/experimental-features/#inline-data
>
> Is there something missing from the documentation? I have turned on this
> feature:
>
> $ ceph fs dump | grep inline_data
> dumped fsmap epoch 1224
> inline_data     enabled
>
> I have reduced the size of the bonnie-generated files to 1 byte. But
> this is the situation halfway into the test: (output slightly shortened)
>
> $ rados df
> POOL_NAME      USED OBJECTS CLONES   COPIES
> fs-data     3.2 MiB 3390041      0 10170123
> fs-metadata 772 MiB    2249      0     6747
>
> total_objects    3392290
> total_used       643 GiB
> total_avail      957 GiB
> total_space      1.6 TiB
>
> I.e., bonnie has created a little over 3 million files, for which the
> same number of objects was created in the data pool. So the raw usage is
> again at more than 500 GB.

Even for inline files, one object is created in the data pool to hold
backtrace information (stored as an xattr on that object), which is used
for hard links and disaster recovery.
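
If you want to see this on a live cluster, a minimal sketch using the
fs-data pool from the rados df output above (the object name below is a
made-up example; CephFS names data objects "<inode in hex>.<block in
hex>", so list real ones first):

$ rados -p fs-data ls | head -1
# (10000000001.00000000 below is a hypothetical name taken from such a listing)
$ rados -p fs-data stat 10000000001.00000000        # object size; 0 bytes if the data is truly inline
$ rados -p fs-data listxattr 10000000001.00000000   # expect a "parent" xattr holding the backtrace
$ rados -p fs-data getxattr 10000000001.00000000 parent > /tmp/backtrace
$ ceph-dencoder type inode_backtrace_t import /tmp/backtrace decode dump_json

The decoded backtrace is the chain of ancestor dentries the MDS uses to
resolve hard links and to rebuild metadata during disaster recovery.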
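
As for the raw usage: if those per-file objects still carry the file
data (the data pool's 3.2 MiB USED works out to roughly one byte per
object), each of them occupies at least one BlueStore allocation unit on
every replica. A rough back-of-the-envelope check, assuming HDD-backed
OSDs with the then-default bluestore_min_alloc_size_hdd of 64 KiB and 3x
replication (both are assumptions, not given in the thread):

# assumes bluestore_min_alloc_size_hdd = 64 KiB and pool size = 3
$ echo $(( 3390041 * 64 / 1024 / 1024 ))      # ~207 GiB allocated per replica
$ echo $(( 3390041 * 64 * 3 / 1024 / 1024 ))  # ~620 GiB raw, same ballpark as the 643 GiB total_used

Either way, one data-pool object per file remains, so very small files
stay expensive at the RADOS level.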

-- 
Patrick Donnelly



