On 03/10/2014 08:18 AM, Xavier Trilla wrote:
Hi!
I've been checking some of the available information about cache pools
and I've come up with some questions:
-What do you think is a better approach to improve the performance of
RBD for VMs: Caching OSDs with FlashCache or using SSD Cache Pools?
As cache pools aren't even out yet, and few people have really dug into
flashcache-backed OSDs, it's kind of tough to say. :) If you do any
testing, we'd love to hear your results!
-As I understand it, the kernel driver will not be able to mount RBD
images from cache pools until it gets upgraded, am I right? (Kernel 3.14
probably?)
-As QEMU / KVM uses librbd, if I upgrade the Ceph client packages on the
hypervisor host, will VMs be able to mount RBD images from cache pools?
(As I understand it, QEMU / KVM links dynamically to the librbd library, right?)
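One quick way to check the dynamic-linking assumption on a hypervisor is
something like the sketch below (the QEMU binary name varies by
distribution; qemu-system-x86_64 is just an example):

    # Check whether the QEMU binary links dynamically against librbd.
    # If it does, an upgraded librbd should be picked up by newly
    # started VM processes (already-running VMs keep the old copy
    # mapped until they are restarted or live-migrated).
    ldd $(which qemu-system-x86_64) | grep rbd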
Actually, I plan to perform some tests myself soon, but if someone
could give me some insight before I get my hands on proper HW to run
them, that would be really great.
I think the only insight I can give right now is that flashcache is
going to cache things at the block layer while a cache pool will cache
things at the object layer. There are potentially some advantages and
disadvantages in each case. I haven't really thought this through well
yet, but here are some initial guesses (with a rough setup sketch after
each list):
flashcache:
+ All caching is local to a node, much less overhead (including network!).
+ May do better in situations with many object accesses and lots of hot
inodes/dentries?
- No ability to define different replication/EC policy for cache only.
- Cache writes are non-atomic?
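For reference, a flashcache-backed OSD setup might look roughly like
this (a sketch only; /dev/sdb as the SSD, /dev/sdc as the HDD, and OSD
id 0 are placeholders, and writeback mode is what makes the
non-atomicity question above relevant):

    # Create a writeback ("-p back") flashcache device named
    # "osd0-cache" with the SSD in front of the HDD.
    flashcache_create -p back osd0-cache /dev/sdb /dev/sdc

    # Put the OSD's filesystem on the resulting device-mapper target
    # and mount it as the OSD data directory.
    mkfs.xfs /dev/mapper/osd0-cache
    mount /dev/mapper/osd0-cache /var/lib/ceph/osd/ceph-0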
Ceph Cache Tier:
+ Potentially can do things like replication for cache and erasure
coding for cold data.
+ Maybe safer?
- More network overhead, potentially more CPU overhead.
- Takes longer to get things into cache?
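Based on what's in the development branch (cache pools aren't released
yet, so the exact commands may still change), a replicated cache pool
in front of an erasure-coded base pool might be set up roughly like
this; the pool names and PG counts are placeholders, and mapping the
cache pool onto SSD OSDs via a CRUSH rule is not shown:

    # Create an erasure-coded base pool for cold data and a
    # replicated pool for the hot cache.
    ceph osd pool create cold-data 128 128 erasure
    ceph osd pool create hot-cache 128 128 replicated

    # Attach the cache pool as a writeback tier in front of the base
    # pool and redirect client traffic through it.
    ceph osd tier add cold-data hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-data hot-cache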
Thanks!
Kind regards,
Xavier Trilla P.
Silicon Hosting <https://siliconhosting.com/>
Don't know Bare Metal Cloud yet?
The evolution of VPS servers has arrived!
More information at: siliconhosting.com/cloud
<http://www.silicontower.net/cloud>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com