On 08/10/2012 03:33 AM, Curtis C. wrote:
Hi all, my workplace is considering deploying Ceph for OpenStack block storage, in order to provide volume and VM migration.
Disclaimer: I have never worked with OpenStack, only with CloudStack.
Immediately I decided, perhaps wrongly, that this meant a dedicated Ceph cluster of physical servers. But a colleague suggested that it might be better to run Ceph on the compute nodes instead of on a separate, dedicated cluster.
I think it's common to run Ceph on dedicated hardware, but it is not mandatory.
There is, however, a known issue where running kernel RBD or the Ceph filesystem client on a node that also hosts an OSD can deadlock under memory pressure (I'm not sure whether that is still the case), but with Qemu's userspace RBD driver this shouldn't be a problem.
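For illustration, here is a minimal sketch of a libvirt disk definition that attaches a volume through Qemu's userspace librbd driver instead of the kernel client; the pool name, image name, and monitor hostname are made up:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <!-- protocol='rbd' makes qemu talk to the cluster via librbd,
           entirely in userspace, so no kernel RBD client is involved -->
      <source protocol='rbd' name='volumes/vm-0001'>
        <host name='mon-a.example.com' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>

Because the whole I/O path stays inside the qemu process, the co-location deadlock described above shouldn't apply.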
Is one of these a more appropriate solution (in a perfect world)? Or is this a six of one, half a dozen of the other kind of situation? I can definitely understand trying to make compute and storage generic and plentiful by keeping them on the same nodes, but I also feel it's OK to split things out sometimes.
You should be able to run the OSD daemons on the compute nodes, but you will also need a cluster of monitors (three are recommended); I would not run those on the compute nodes.
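As a rough sketch, a ceph.conf along these lines would keep the three monitors on small dedicated machines while the OSDs live on the compute nodes; all hostnames and addresses here are hypothetical:

    [global]
        auth supported = cephx

    ; three monitors on dedicated machines
    [mon.a]
        host = mon-a
        mon addr = 192.168.0.10:6789
    [mon.b]
        host = mon-b
        mon addr = 192.168.0.11:6789
    [mon.c]
        host = mon-c
        mon addr = 192.168.0.12:6789

    ; OSDs co-located with the hypervisors
    [osd.0]
        host = compute-01
    [osd.1]
        host = compute-02

Monitors are lightweight compared to OSDs, so three small dedicated boxes (or VMs elsewhere) are usually enough for them.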
Be aware, however, that OSDs can eat a lot of memory and CPU, especially during recovery. They can consume enough memory to push a machine into an out-of-memory condition, and it wouldn't be nice if the OOM killer took a couple of your instances down along with the OSD.
So, technically you can run the OSDs on compute nodes, but in terms of stability I wouldn't recommend it.
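If you decide to co-locate them anyway, one way to limit the blast radius is to cap the OSD's memory with cgroups, so that the kernel's OOM killer reaps the OSD before it reaps your guests. A rough sketch using the libcgroup tools, with a made-up 4 GB cap and OSD id 0:

    # create a memory cgroup for the OSD and cap it at 4 GB
    cgcreate -g memory:/ceph-osd
    cgset -r memory.limit_in_bytes=4G ceph-osd

    # start the OSD inside that cgroup
    cgexec -g memory:/ceph-osd /usr/bin/ceph-osd -i 0 -c /etc/ceph/ceph.conf

A killed OSD is something Ceph is designed to recover from; a killed guest is not, so the OSD is usually the better process to sacrifice.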
Wido
Any thoughts, suggestions, or reference architecture pointers? Thanks for any help, Curtis.