I've been trying to do the exact same thing and have run a lot of tests. Be aware that you need a sufficient number of monitor daemons to ensure high availability: the monitors can only keep the cluster running as long as a majority of them (a quorum) is up. In practice this means that even if you only want to run ceph on two nodes, you still need three monitor daemons so that one of the monitor hosts can fail. With two out of three monitor hosts down, there is no more i/o on your ceph cluster. (A rough sketch of such a three-monitor layout is at the end of this mail.)

Other than that, the rbd / qemu integration worked very well in all my tests. I really love ceph, but I don't know whether it's suitable for small installations as far as HA is concerned.

Wolfgang

On 02/14/2013 04:35 PM, Bram Vandoren wrote:
> Hi,
> we want to set up storage infrastructure for a few virtual machines
> (5-10). Our main concern is high availability (rather than huge
> amounts of data or very high performance). We did a few tests using
> fedora + kvm-qemu (with rbd backend) + ceph + libvirt. So far it works
> fine. We haven't found reports on the internet from people using a
> similar setup; most people choose the more common drbd+iscsi/nfs
> setup. Is anyone using this kind of configuration in a production
> environment who wants to share their experience?
>
> Thanks,
> Bram

-- 
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbichler@xxxxxxxxxxxxxxxx
http://www.risc-software.at
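
P.S.: For illustration only, a three-monitor layout in ceph.conf could look roughly like this. The host names and addresses are placeholders, not taken from my test setup:

    [mon.a]
        # placeholder host/address - replace with your first monitor node
        host = node1
        mon addr = 192.168.0.1:6789

    [mon.b]
        # placeholder host/address - replace with your second monitor node
        host = node2
        mon addr = 192.168.0.2:6789

    [mon.c]
        # placeholder host/address - replace with your third monitor node
        host = node3
        mon addr = 192.168.0.3:6789

With a layout like this, any two monitors still form a majority, so the cluster keeps serving i/o if a single monitor host fails; lose two and the remaining monitor alone cannot form a quorum.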