Introductions

Hi all,

I'm Zach Hill, the storage lead at Eucalyptus <http://www.eucalyptus.com>.
We're working on adding Ceph RBD support for our scale-out block storage
(EBS API). Things are going well, and we've been happy with Ceph thus far.
We're mostly a RHEL/CentOS shop, so any tips on that front would be greatly
appreciated.

Our basic architecture is that a storage control node issues the
control-plane operations: create image, delete, snapshot, etc. This
controller uses librbd directly via JNA bindings. VMs access the Ceph RBD
images as block devices exposed via the QEMU/KVM RBD driver on our "Node
Controller" hosts. It's similar to OpenStack Cinder in many ways; a quick
sketch of the librbd side is below.

One of the questions we often get is:
Can I run OSDs on my servers that also host VMs?

Generally, we strongly recommend against such a deployment, to ensure
performance and failure isolation between the compute and storage sides of
the system. But I'm curious whether anyone is doing this in practice and
has found reasonable ways to make it work in production.

Thanks in advance for any info; we're happy to be joining this community in
a more active way.

-Zach