I am. For our workloads it works fine. The biggest trick I've found is making sure Nova leaves enough free RAM so the OSDs aren't starved. In my case each node runs three OSDs, so in my nova.conf I added "reserved_host_memory_mb = 3072" to help ensure that. Each node has 72GB of RAM, so there's plenty left for VMs.
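For anyone wanting to try the same thing, a sketch of the relevant nova.conf fragment (the 3072 value is just what I use for three OSDs; scale it to your own OSD count and memory targets):

```ini
[DEFAULT]
# Reserve roughly 1GB of host RAM per OSD (3 OSDs on this node),
# so the Nova scheduler doesn't pack VMs into memory Ceph needs.
reserved_host_memory_mb = 3072
```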
If I had it to do over again, I would have pushed for enough budget to split the cluster and get Ceph-dedicated storage nodes that can hold 20 or so disks each. Not because of any problems we've had with cohosting the two on the same nodes, but because the number of VMs we can run is limited by IOPS, so I need more spindles. When running three replicas in the Ceph pool, I've found it's a good rule of thumb to assume one disk per VM to get good, consistent performance once you've eliminated other bottlenecks. In my case I have 42 OSD disks, so I can run at most about 42 VMs before performance starts to get weird.
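The back-of-the-envelope math above can be sketched as follows (the function name and the disks_per_vm knob are illustrative, not anything official; the rule of thumb is mine and assumes 3x replication with other bottlenecks already removed):

```python
def max_vms(osd_disks: int, disks_per_vm: int = 1) -> int:
    """Rule of thumb: with 3x replication in the Ceph pool, budget
    about one spindle per VM for consistent IOPS."""
    return osd_disks // disks_per_vm

# My cluster has 42 OSD disks, so roughly 42 VMs before
# performance starts to degrade.
print(max_vms(42))  # 42
```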
QH
2015-10-26 4:17 GMT-06:00 Stolte, Felix <f.stolte@xxxxxxxxxxxxx>:
Hi all,
is anyone running nova compute on ceph OSD Servers and could share his experience?
Thanks and Regards,
Felix
Forschungszentrum Juelich GmbH
52425 Juelich
Registered office: Juelich
Registered in the commercial register of the Dueren local court, no. HR B 3498
Chairman of the supervisory board: MinDir Dr. Karl Eugen Huthmacher
Management: Prof. Dr.-Ing. Wolfgang Marquardt (chairman),
Karsten Beneke (deputy chairman), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com