On Thu, 12 Apr 2012, Martin Mailand wrote:
> Hi,
>
> today I tried the wip-librbd-caching branch. The performance
> improvement is very good, particularly for small writes.
> I tested from within a VM with fio:
>
> rbd_cache_enabled=1
>
> fio --name=iops --rw=write --size=10G --iodepth=1 \
>     --filename=/tmp/bigfile --ioengine=libaio --direct=1 --bs=4k
>
> I get over 10k IOPS.
>
> With an iodepth of 4 I get over 30k IOPS.
>
> In comparison, with the rbd_writebackwindow I get around 5k IOPS
> with an iodepth of 1.
>
> So far the whole cluster has been running stable for over 12 hours.

Great to hear!

> But there is also a downside.
> My typical VMs are 1 GB in size, and the default cache size is 200 MB,
> which is 20% more memory usage. Maybe 50 MB or less will be enough?
> I am going to test that.

The config options you'll want to look at are client_oc_* (in case you
didn't see that already :).  "oc" is short for objectcacher, and it
isn't used only by the client (libcephfs), so it might be worth
renaming these options before people start using them.

> The other point is that the cache is not KSM enabled, so identical
> pages will not be merged. Could that be changed, and what would be
> the downside?
>
> So maybe we could reduce the memory footprint of the cache but keep
> its performance.

I'm not familiar with the performance implications of KSM, but the
objectcacher doesn't modify existing buffers in place, so I suspect
it's a good candidate.  And it looks like there's minimal effort in
enabling it...

sage
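
For reference, a sketch of what shrinking the cache might look like in
ceph.conf, assuming the branch reads the usual client_oc_* options; the
50 MB figure just mirrors Martin's suggestion, and the dirty-data limits
below are illustrative values, not recommendations:

  [client]
      client_oc_size = 52428800          # total cache size (50 MB)
      client_oc_max_dirty = 26214400     # cap on dirty data held back (25 MB)
      client_oc_target_dirty = 8388608   # start flushing at 8 MB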
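
On the KSM point: the kernel only scans memory that has been registered
with madvise(MADV_MERGEABLE), so "enabling it" amounts to marking the
cache's (page-aligned) buffer allocations as mergeable. A minimal
standalone sketch of that mechanism -- not the objectcacher code itself,
and the 64 MB arena size is made up:

  #include <sys/mman.h>
  #include <cstdio>

  int main() {
    const size_t len = 64 * 1024 * 1024;   // hypothetical cache arena

    // mmap gives a page-aligned anonymous region to carve buffers from.
    void *buf = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
      perror("mmap");
      return 1;
    }

    // Mark the region as mergeable; identical pages inside it become
    // candidates for deduplication once ksmd is running
    // (echo 1 > /sys/kernel/mm/ksm/run).
    if (madvise(buf, len, MADV_MERGEABLE) != 0)
      perror("madvise(MADV_MERGEABLE)");   // e.g. kernel built without CONFIG_KSM

    // ... cache buffers would live inside this region ...

    munmap(buf, len);
    return 0;
  }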