Re: ceph and OpenStack

Stuart, 

It's covered here too: http://ceph.com/docs/master/faq/#how-can-i-give-ceph-a-try

That comment only applies to the quick start--e.g., someone spinning up a Ceph cluster on their laptop to try it out. One of the things we've tried to provide to the community is a way to try Ceph on the minimum number of machines possible, so that you can get a feel for how it works without building a production-worthy cluster. The catch with trying Ceph on a single machine is that if ceph-mon and ceph-osd daemons are running on a host, you really shouldn't mount rbd or CephFS as a kernel client on that same host. The limitation isn't in Ceph, but in the Linux kernel itself. You'd never do this in production, so the admonishment only applies to the quick start.
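To make the distinction concrete, the difference is between the in-kernel clients and the userspace (librbd) clients. A rough sketch, using hypothetical pool/image names:

    # Kernel client: this is what to avoid on a host that also runs
    # ceph-mon/ceph-osd daemons with an older kernel.
    rbd map rbd/myimage
    mount /dev/rbd0 /mnt/myimage

    # Userspace client (librbd via QEMU): no kernel rbd module is
    # involved, so the deadlock concern doesn't apply.
    qemu-img info rbd:rbd/myimage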

You can run the RADOS Gateway (radosgw) on the same host as the OSDs. It is a userspace client, so only kernel-mounted clients on older kernels have the potential to deadlock. As always when running multiple daemons on one host, be mindful of the memory and CPU requirements of the various daemons.
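If it helps, a minimal ceph.conf stanza for a gateway colocated with the OSDs might look roughly like the following (the hostname and paths are just placeholders; check the radosgw install docs for your version):

    [client.radosgw.gateway]
        host = mgmt1
        keyring = /etc/ceph/keyring.radosgw.gateway
        rgw socket path = /tmp/radosgw.sock
        log file = /var/log/ceph/radosgw.log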

Regards,


John



On Tue, Apr 16, 2013 at 12:27 AM, Stuart Longland <stuartl@xxxxxxxxxx> wrote:
Hi all,

I've been doing quite a bit of research and planning for a new virtual
computing cluster that my company is building for their production
infrastructure.

We're looking to use OpenStack to manage the virtual machines across a
small cluster of nodes.

Currently we're looking at having 3 storage nodes, each with two 3 TB
drives and dual Gigabit Ethernet, 3 management nodes with dual 10 GbE
network cards, and about 16 (or more) compute nodes.

The plan is that the management nodes will provide services such as
the Rados Gateway for object storage, and volume management with
Cinder.  The reasoning: using rbd means that we have redundancy of
volumes and images, something plain LVM (which Cinder would otherwise
use) can't provide on its own.
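
For reference, I understand the Cinder side of this to be roughly the
following in cinder.conf (the pool and user names here are just
placeholders, not a tested configuration):

    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret UUID>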

What I'm not clear on is just where each daemon can exist.  ceph-osd
will obviously be running on each of the storage nodes; I'd imagine
ceph-mon will run on the management nodes.  However, I read in the
documentation:

> We recommend using at least two hosts, and a recent Linux kernel. In
> older kernels, Ceph can deadlock if you try to mount CephFS or RBD
> client services on the same host that runs your test Ceph cluster.

I presume that the actual rbd client for volume/block storage would
in fact be running on the compute nodes, and thus be separate from
the osd and mon daemons.  The one I'm not clear on is the
ceph-gateway.

Does this operate as a client, and thus should run separately from
ceph-mon/ceph-osd?  Or can it reside on one of the monitor hosts?  Do
the clients potentially deadlock with ceph-mon, ceph-osd, or both?

Apologies if this is covered somewhere; I've been looking on and off
over the last few months but haven't spotted much on the topic.

Regards,
--
##   -,-''''-. ###### Stuart Longland, Software Engineer
##.  :  ##   :   ##   38b Douglas Street
 ## #  ## -'`   .#'   Milton, QLD, 4064
 '#'  *'   '-.  *'    http://www.vrt.com.au
     S Y S T E M S    T: 07 3535 9619    F: 07 3535 9699



--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
