Re: CephFS billions of files and inline_data?

On Wed, Aug 16, 2017 at 3:27 PM, Henrik Korkuc <lists@xxxxxxxxx> wrote:
> Hello,
>
> I have a use case for billions of small files (~1KB) on CephFS, and
> since in my experience having billions of objects in a pool is not a
> good idea (ops slow down, large memory usage, etc.), I decided to
> test CephFS inline_data. After activating this feature and starting
> the copy process, I noticed that objects are still created in the
> data pool, but their size is 0. Is this expected behavior? Can anyone
> share tips on handling large numbers of small objects? I am on
> 12.1.3, and am already using a decreased minimum allocation size for
> bluestore.
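
For reference, the setup you describe corresponds roughly to the
following sketch (the filesystem name "cephfs" and the value 4096 are
assumptions, and the experimental inline data feature may require a
confirmation flag depending on release):

    # Enable inline data on the filesystem:
    ceph fs set cephfs inline_data true

    # ceph.conf, [osd] section -- a smaller BlueStore minimum
    # allocation size, which only takes effect when an OSD is created:
    bluestore_min_alloc_size = 4096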

A couple of thoughts:
 - Frequently, when someone has a "billions of small files" workload,
they really want an object store, not a filesystem.
 - In many cases the major per-file overhead is MDS CPU (the request
rate it can sustain) rather than OSD ops, so inline data may save OSD
work without producing an overall speedup (see the sketch below).
 - If you do need to get rid of the overhead of writing objects to the
data pool, you could work on adding a special per-filesystem
"backtraceless" flag, where the filesystem cannot do lookups by inode
(no NFS, no hard links, limited disaster recovery) but doesn't write
backtraces either.
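
On the second point, a quick way to check whether the MDS is the
limiting factor is to watch its perf counters while the copy runs. A
minimal sketch; "mds.a" is a placeholder for your daemon's name:

    # Live, top-like per-second view of MDS counters (request rate etc.):
    ceph daemonperf mds.a

    # One-off dump of all MDS perf counters:
    ceph daemon mds.a perf dump

If the MDS request rate is saturated while the OSDs are mostly idle,
inline data alone won't change the overall throughput much.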

John

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


