Re: Ceph and open source cloud software: Path of least resistance

Hi Jens,

On 6/17/13 05:02 AM, Jens Kristian Søgaard wrote:
> Hi Stratos,
>
>> you might want to take a look at Synnefo. [1]
>
> I did take a look at it earlier, but decided not to test it.
>
> Mainly I was deterred because I found the documentation a bit lacking. I opened up the section on File Storage and found that there were only chapter titles, but no actual content. Perhaps I was too quick to dismiss it.


Thanks for your interest in our work with Synnefo.

It seems you are referring to the empty sections of the Administrator's
Guide. If so, you're right: the project is under very active development,
so we are mostly focusing on the Installation Guide right now, which we
always try to keep up to date with the latest commits:

http://www.synnefo.org/docs/synnefo/latest/quick-install-admin-guide.html

Perhaps you were a bit too quick to dismiss it. If you start playing
around with Ganeti for VM management, I think you'll love its simplicity
and reliability. Synnefo is then a nice way of providing cloud interfaces
on top of Ganeti VMs, while also adding the cloud storage part.

> A somewhat more practical problem for me was that my test equipment consists of a single server (besides the Ceph cluster). As far as I understood the docs, there is a bug that makes it impossible to run Synnefo on a single server (to be fixed in the next version)?


This has been completely overhauled in Synnefo 0.14, which will be out by
next week, allowing any combination of components to coexist on a single
node, with arbitrary URL prefixes for each. If you're feeling adventurous,
you can find 0.14~rc4 packages for Squeeze at apt.dev.grnet.gr; we've also
uploaded the latest version of the docs at http://docs.synnefo.org.

> Regarding my goals, I read through the installation guide and it recommends setting up an NFS server on one of the servers to serve images to the rest. This is what I wanted to avoid. Is that optional, and/or could it be replaced with Ceph?


We have integrated the storage service ("Pithos") with the compute
service as the image repository. Pithos has pluggable storage drivers,
through which it stores files as collections of content-addressable blocks.
One driver uses NFS, storing objects as distinct files on a shared
directory; another uses RADOS, storing them as RADOS objects. Our
production used to run on NFS, and we're now transitioning to RADOS
exclusively. Currently, we use both drivers simultaneously: incoming file
chunks are stored both in RADOS and in the NFS share. Eventually, we'll
just unplug the NFS driver when we're ready to go RADOS-only.

In your case, you can start with Pithos being RADOS-only, although the
Installation Guide continues to refer to NFS for simplicity.
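
To make the dual-driver setup above a bit more concrete, here is a rough,
illustrative sketch of a content-addressable block store with two pluggable
backends: one writing blocks as files into a shared (NFS-style) directory,
the other writing them as RADOS objects via the python-rados bindings. This
is not the actual Pithos code; the pool name, directory path and class names
are made up for the example.

import hashlib
import os

import rados  # python-rados bindings shipped with Ceph

BLOCK_SIZE = 4 * 1024 * 1024  # files are split into fixed-size blocks


class FileBackend(object):
    """Stores each block as a plain file in a shared (e.g. NFS-mounted) directory."""

    def __init__(self, root):
        self.root = root

    def put(self, name, data):
        with open(os.path.join(self.root, name), 'wb') as f:
            f.write(data)

    def get(self, name):
        with open(os.path.join(self.root, name), 'rb') as f:
            return f.read()


class RadosBackend(object):
    """Stores each block as a RADOS object in a dedicated pool."""

    def __init__(self, pool, conffile='/etc/ceph/ceph.conf'):
        self.cluster = rados.Rados(conffile=conffile)
        self.cluster.connect()
        self.ioctx = self.cluster.open_ioctx(pool)

    def put(self, name, data):
        self.ioctx.write_full(name, data)

    def get(self, name):
        return self.ioctx.read(name, BLOCK_SIZE)


def store_file(path, backends):
    """Split a file into blocks named by their hash and store them on all backends.

    Returns the list of block hashes ("hashmap") that describes the file.
    """
    hashmap = []
    with open(path, 'rb') as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            name = hashlib.sha256(block).hexdigest()
            for backend in backends:
                backend.put(name, block)   # e.g. both the NFS share and RADOS
            hashmap.append(name)
    return hashmap

With both backends in the list you get the "store in both places" behaviour
described above; going RADOS-only is then just a matter of dropping the file
backend from the list.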

>> At the moment Ganeti only supports the in-kernel RBD driver, although
>> support for the qemu-rbd driver should be implemented soon. Using the

> Hmm, I wanted to avoid using the in-kernel RBD driver, as I figured it led to various problems. Is it not a problem in practice?


Our demo installation at http://www.synnefo.org ["Try it out"] uses the
in-kernel RBD driver for the "rbd" storage option, and we haven't
encountered any significant problems with it. Furthermore, AFAIK, one of
the next Ganeti versions will also support choosing between the in-kernel
and qemu-rbd userspace drivers when spawning a VM, so Synnefo will then
support that out of the box as well.

> I was thinking it would be wisest to stay with the distribution kernel, but I guess you swap it out for a later version?


For our custom storage layer (Archipelago, see below) we require a newer
kernel than the one that comes with Squeeze, so we run 3.2 from
squeeze-backports; everything has been going smoothly so far.

> The RBDs for all my existing VMs would probably have to be converted back from format 2 to format 1, right?


If you plan to use the in-kernel rbd driver, it seems so, yes:
http://ceph.com/docs/next/man/8/rbd/#parameters
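
For illustration only, here is a rough sketch of what such a conversion
could look like with the python-rbd bindings: it copies the contents of an
existing format 2 image into a freshly created format 1 image. The pool and
image names are made up, and the rbd export/import CLI tools are the more
usual route.

import rados
import rbd

CHUNK = 4 * 1024 * 1024  # copy in 4 MB pieces

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')        # hypothetical pool name

src = rbd.Image(ioctx, 'vm-disk')        # existing format 2 image
size = src.size()

# old_format=True asks librbd for a format 1 image, usable by the kernel driver
rbd.RBD().create(ioctx, 'vm-disk-fmt1', size, old_format=True)
dst = rbd.Image(ioctx, 'vm-disk-fmt1')

offset = 0
while offset < size:
    data = src.read(offset, min(CHUNK, size - offset))
    dst.write(data, offset)
    offset += len(data)

src.close()
dst.close()
ioctx.close()
cluster.shutdown()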

Beyond that, I can't comment from experience, because we only run rbd as an
option in the demo environment, with the in-kernel driver. For our
production, we're running a custom storage layer (Archipelago), which does
thin provisioning of volumes from Pithos files and accesses the underlying
Pithos objects directly, no matter which driver (RADOS or NFS) is in use.
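
As a purely conceptual sketch of that thin-provisioning idea (this is not
the actual Archipelago code; it just reuses the hypothetical block-store
classes from the earlier sketch), a cloned volume can start out pointing at
the image's block hashes and only allocate new blocks when written to:

import hashlib

BLOCK_SIZE = 4 * 1024 * 1024


class ThinVolume(object):
    """A volume cloned from an image hashmap; blocks are shared until written.

    For simplicity, reads and writes are assumed to stay within one block.
    """

    def __init__(self, image_hashmap, backend):
        self.map = list(image_hashmap)   # initially points at the image's blocks
        self.backend = backend           # e.g. the RadosBackend sketched earlier

    def read(self, offset, length):
        idx = offset // BLOCK_SIZE
        block = self.backend.get(self.map[idx])
        start = offset % BLOCK_SIZE
        return block[start:start + length]

    def write(self, offset, data):
        idx = offset // BLOCK_SIZE
        block = bytearray(self.backend.get(self.map[idx]))
        start = offset % BLOCK_SIZE
        block[start:start + len(data)] = data
        # Copy-on-write: store the modified block under its new hash and
        # repoint only this volume's map entry; the image stays untouched.
        name = hashlib.sha256(bytes(block)).hexdigest()
        self.backend.put(name, bytes(block))
        self.map[idx] = name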

Thanks again for your interest,
Constantinos

