Re: Dramatic performance drop at certain number of objects in pool

On 06/27/2016 03:12 AM, Blair Bethwaite wrote:
On 25 Jun 2016 6:02 PM, "Kyle Bader" <kyle.bader@xxxxxxxxx> wrote:
fdatasync takes longer when you have more inodes in the slab caches;
it's the double-edged sword of vfs_cache_pressure.
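
For reference, vm.vfs_cache_pressure is the sysctl that biases reclaim
between the dentry/inode slabs and the page cache. A minimal sketch,
assuming a Linux host with /proc mounted (slab names such as xfs_inode
vary by filesystem, and reading /proc/slabinfo needs root), for checking
the current setting and the inode/dentry slab footprint:

    #!/usr/bin/env python3
    # Sketch: report vfs_cache_pressure and dentry/inode slab object counts.
    # Assumes Linux with /proc mounted; /proc/slabinfo requires root.

    def vfs_cache_pressure():
        with open("/proc/sys/vm/vfs_cache_pressure") as f:
            return int(f.read().strip())

    def slab_counts(names=("dentry", "xfs_inode", "ext4_inode_cache")):
        # /proc/slabinfo columns: name active_objs num_objs objsize ...
        counts = {}
        with open("/proc/slabinfo") as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] in names:
                    counts[fields[0]] = int(fields[1])
        return counts

    if __name__ == "__main__":
        print("vm.vfs_cache_pressure =", vfs_cache_pressure())
        for name, active in slab_counts().items():
            print(name, "active objects:", active)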

That's a bit sad when, if I understand correctly, it's only the journals
doing fdatasync in the Ceph write path. I'd have expected the VFS to handle
this on a per-filesystem basis (and a journal filesystem would have very
little in the inode cache).

It's somewhat annoying that there isn't a way to favor dentries (and perhaps
dentry inodes) over other inodes in the VFS cache. Our experience shows
that it's dentry misses that cause the major performance issues (which makes
sense when you consider the OSD is storing all its data in the leaves of
the on-disk PG structure).
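
The kernel exposes the dentry cache size in /proc/sys/fs/dentry-state, and
with FileStore each object is a file in the leaf directories of the PG
hierarchy under the OSD's current/ directory, so a cold dentry chain costs
several metadata lookups per object. A minimal sketch, assuming a local
FileStore OSD (the ceph-0 path below is just an example), that reports the
dentry cache size and the depth/fan-out of the on-disk PG tree:

    #!/usr/bin/env python3
    # Sketch: dentry cache size plus depth/fan-out of a FileStore OSD's PG tree.
    # The OSD path is an example -- adjust for your deployment.
    import os

    OSD_CURRENT = "/var/lib/ceph/osd/ceph-0/current"  # example path

    def dentry_state():
        # /proc/sys/fs/dentry-state fields: nr_dentry nr_unused age_limit ...
        with open("/proc/sys/fs/dentry-state") as f:
            nr_dentry, nr_unused = map(int, f.read().split()[:2])
        return nr_dentry, nr_unused

    def tree_stats(root):
        dirs = files = max_depth = 0
        for path, dnames, fnames in os.walk(root):
            depth = path[len(root):].count(os.sep)
            max_depth = max(max_depth, depth)
            dirs += len(dnames)
            files += len(fnames)
        return dirs, files, max_depth

    if __name__ == "__main__":
        nr_dentry, nr_unused = dentry_state()
        print("dentries:", nr_dentry, "total,", nr_unused, "unused")
        dirs, files, depth = tree_stats(OSD_CURRENT)
        print(OSD_CURRENT, "->", dirs, "dirs,", files, "files, max depth", depth)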

This is another discussion that seems to back up the choice to implement
BlueStore.

Indeed.

Mark


Cheers,
Blair



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
