Re: Ceph and open source cloud software: Path of least resistance

On 06/16/2013 08:48 PM, Jens Kristian Søgaard wrote:
> Hi guys,
>
> I'm looking to set up an open source cloud IaaS system that will work
> well together with Ceph. I'm looking for a system that will handle
> running KVM virtual servers with persistent storage on a number of
> physical servers, with a multi-tenant dashboard.
>
> I have now tried a number of systems, but I'm having difficulty finding
> something that will work with Ceph in an optimal way -- or at least,
> having difficulty finding hints on how to achieve that.
>
> By optimal I mean:
>
> a) To have Ceph as the only storage, so that I don't have an NFS SPoF
> nor have to wait for images to be copied from server to server.
>
> b) To run KVM with the async flush feature in qemu 1.4.2 (or backported)
> and with the librbd cache.
>
>
> Are any of you doing this? Do you have any hints to offer?
>
> I have tried CloudStack, but found that it was not possible to rely
> fully on Ceph storage. I learnt that it would potentially be possible
> with the upcoming 4.2 release, so I tried installing CloudStack from
> the development source code tree. I wasn't able to get this working
> because of various bugs (to be expected when running a development
> version, of course).
>
> I also tried OpenNebula, but found that it was very hard to get
> working on the recommended CentOS 6.4 distribution. By upgrading all
> sorts of components and manually patching parts of the system I was
> able to get it "almost working". In the end, however, I was stuck in
> a dilemma: OpenNebula needed a newer qemu version to support RBDs,
> and that newer qemu didn't work well with the older libvirt. On the
> other hand, if I upgraded libvirt, I couldn't get it to work with the
> older qemu versions with backported RBD support, because the newer
> libvirt was setting an auth_supported=none option that stopped it
> from working. It didn't seem possible to convince OpenNebula to store
> a secret for Ceph with libvirt.
>
> I have been looking at OpenStack, but from reading the documentation
> and googling it seems that it is not possible to configure OpenStack
> to use the librbd cache with Ceph. Can this be right?
>
> Or is it merely the case that you cannot configure it on a per-VM
> basis, so that you have to rely on the default settings in ceph.conf?
> (That wouldn't be a problem for me.)
>
> Any advice you could give would be greatly appreciated!
>
> Thanks,

Hi,

You might want to take a look at Synnefo [1].

Synnefo is a complete open source cloud IaaS platform. It uses Google
Ganeti [2] for VM cluster management at the backend and exposes
OpenStack APIs at the frontend. Synnefo supports Ceph / RBD at the API
layer as a 'disk template' when creating VMs, and passes that
information on to Ganeti, which does the actual RBD device handling
(see the sketch below).
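
As a rough illustration (just a sketch -- the node name, instance name,
OS variant and disk size below are made-up placeholders), creating an
RBD-backed instance directly with Ganeti looks roughly like this:

    # 'rbd' disk template: Ganeti creates the disk as an RBD image
    gnt-instance add -t rbd -o debootstrap+default \
        --disk 0:size=10G -n node1.example.com vm1.example.com

Synnefo simply selects this disk template through its API when a user
creates a VM.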

At the moment Ganeti only supports the in-kernel RBD driver, although
support for the qemu-rbd driver should be implemented soon. Using the
in-kernel RBD driver means that you should probably run a relatively
modern kernel, but it also means that caching and flushing are handled
by the kernel's own mechanisms (page cache, block layer, etc.), without
having to rely on specific qemu / libvirt versions to support them.
Ganeti does *not* use libvirt in the backend, and it supports both KVM
and Xen out of the box.
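
To illustrate what the in-kernel path looks like underneath (a minimal
sketch -- the pool and image names are placeholders), the storage ends
up as a plain block device on the node, roughly like this:

    rbd create --size 10240 rbd/vm1-disk0   # 10 GB image in pool 'rbd'
    rbd map rbd/vm1-disk0                   # maps it as e.g. /dev/rbd0
    rbd showmapped                          # list mapped devices

The VM then uses that /dev/rbdX device like any local disk, which is
why the host's page cache and block layer take care of caching and
flushing.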

You can also read this blog post [3] for more information and to see
how we use Synnefo + Ganeti + Ceph to power a large-scale public cloud
service.

[1] http://www.synnefo.org
[2] https://code.google.com/p/ganeti/
[3] http://synnefo-software.blogspot.gr/2013/02/we-are-happy-to-announce-that-synnefo_11.html

Thanks,
Stratos

-- 
Stratos Psomadakis
<s.psomadakis@xxxxxxxxx>

