Hi John, thanks for your reply...

On 17/04/13 06:45, John Wilkins wrote:
> It's covered here too:
> http://ceph.com/docs/master/faq/#how-can-i-give-ceph-a-try

Yes, I did see that.  There used to be a big fat warning in the
quick-start guides which had me rather worried.  What I was curious
about is which exact bits of Ceph interact; that way, I can architect
things to keep them apart.  rbd and cephfs (unless you use FUSE) do
live in the kernel, but I'm not sure about ceph-mon and ceph-osd.

> The issue with trying Ceph out on only one machine is that if you
> have ceph-mon and ceph-osd daemons running on a host, you really
> shouldn't try to mount rbd as a kernel object or CephFS as a kernel
> object on the same host. It's not related to Ceph, but rather to the
> Linux kernel itself. You'd never do this in production. The
> admonishment only applies to the quick start.
>
> You can run ceph-gateway on the same host as the OSDs. It's only
> kernel mounted clients on older versions of the kernel that have the
> potential to deadlock.

Ahh, okay.  How about ceph-gateway on the same host as ceph-mon?  Does
that code rely on any in-kernel components, or is it entirely
userspace?

I was thinking the back-end storage nodes would run ceph-osd only,
with ceph-mon, ceph-mds and ceph-gateway running on the management
nodes, which will (in future) have 10GbE.  rbd and cephfs will be on
the compute nodes (in fact, I read that kvm has its own userspace rbd
client built in; I suspect OpenStack will use that) and thus safely
out of the way.

If there's the potential for ceph-gateway to interact, then that might
suggest we run OpenStack Swift in parallel with Ceph to provide object
storage of images, allocating partitions for each.  This would be less
preferable, as it complicates the set-up slightly.  Having Ceph
completely manage storage seems the preferable option.

Regards,
--
Stuart Longland, Software Engineer
VRT Systems
38b Douglas Street
Milton, QLD, 4064
http://www.vrt.com.au
T: 07 3535 9619    F: 07 3535 9699
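
P.S. To make the intended daemon placement concrete, here's a rough
ceph.conf sketch of the layout I have in mind.  The hostnames (mgmt01,
store01, store02) are placeholders, and the radosgw section is only
indicative of where the gateway would live, not a tested configuration:

    [mon.a]
            host = mgmt01                  ; monitor on a management node

    [mds.a]
            host = mgmt01                  ; MDS alongside the monitor

    [osd.0]
            host = store01                 ; storage nodes run ceph-osd only

    [osd.1]
            host = store02

    [client.radosgw.gateway]
            host = mgmt01                  ; gateway also on a management node
            rgw socket path = /tmp/radosgw.sock
            keyring = /etc/ceph/keyring.radosgw.gateway

The compute nodes wouldn't appear here at all; they'd only act as
clients via kvm's userspace rbd support.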