Hi all,

I've been doing quite a bit of research and planning for a new virtual computing cluster that my company is building for its production infrastructure. We're looking to use OpenStack to manage the virtual machines across a small cluster of nodes. Currently we're looking at having 3 storage nodes, each with two 3TB drives and dual Gigabit Ethernet, 3 management nodes with dual 10GbE network cards, and about 16 (or more) compute nodes.

The plan is for the management nodes to provide services such as the RADOS Gateway for object storage and volume management with Cinder. The reasoning: using RBD gives us redundancy of volumes and images, something plain LVM (which Cinder would otherwise use) can't provide on its own.

What I'm not clear on is just where each daemon can live. ceph-osd will obviously be running on each of the storage nodes, and I'd imagine ceph-mon will run on the management nodes. However, I read in the documentation:

> We recommend using at least two hosts, and a recent Linux kernel. In
> older kernels, Ceph can deadlock if you try to mount CephFS or RBD
> client services on the same host that runs your test Ceph cluster.

I presume that the actual RBD client for volume/block storage would in fact be running on the compute nodes, and thus be separate from the osd and mon daemons. The one I'm not clear on is the RADOS Gateway (radosgw). Does it operate as a client, and should it therefore run separately from ceph-mon/ceph-osd, or can it reside on one of the monitor hosts? Do the clients potentially deadlock with ceph-mon, ceph-osd, or both?

Apologies if this is covered somewhere; I have been looking on and off over the last few months but haven't spotted much on the topic.

Regards,
--
Stuart Longland, Software Engineer
38b Douglas Street
Milton, QLD, 4064
http://www.vrt.com.au
T: 07 3535 9619  F: 07 3535 9699
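
P.S. In case it helps make the question concrete, here is a rough sketch of the layout I have in mind. The hostnames, IPs and UUIDs are made up for illustration, and the Cinder settings reflect my understanding of the RBD backend options, so corrections are very welcome:

    # /etc/ceph/ceph.conf (sketch) -- monitors on the management nodes,
    # OSDs on the storage nodes (two 3TB drives = two OSDs per node)
    [global]
        fsid = <cluster uuid>
        mon initial members = mgmt1, mgmt2, mgmt3
        mon host = 10.0.0.11, 10.0.0.12, 10.0.0.13

    [osd.0]
        host = storage1
    [osd.1]
        host = storage1
    # ... osd.2 through osd.5 on storage2 and storage3 likewise

    # /etc/cinder/cinder.conf (sketch) -- Cinder pointed at RBD instead
    # of LVM, running on the management nodes; qemu on the compute nodes
    # would then attach the volumes via librbd
    [DEFAULT]
        volume_driver = cinder.volume.drivers.rbd.RBDDriver
        rbd_pool = volumes
        rbd_user = cinder
        rbd_secret_uuid = <libvirt secret uuid>

The open question is whether radosgw can safely sit alongside ceph-mon on mgmt1-3 in that layout, or whether it needs to live elsewhere like the RBD clients on the compute nodes.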