Re: Mounting a shared block device on multiple hosts

Hi Jon,

> What exactly does it mean when you say CephFS is not "production ready"? To me, this typically indicates a product that still has business-crippling bugs.

Exactly - it means that you could, and probably will, stumble upon bugs while running CephFS. If you don't have access to someone who can help you when you run into these bugs, then you shouldn't use it for critical business functions.

> to accomplish with Ceph is a centralized storage space for my Hypervisors in addition to providing storage for the virtual machines

For this you do not need CephFS at all. You can use Ceph and its RBD (RADOS Block Device) support.

> and any other physical machines in the cluster as they need it. The way OpenNebula works is (assume /var/lib/one is ~) it creates virtual

Actually OpenNebula can work in other ways.

RBDs in Ceph are block devices backed by Ceph storage. A block device is similar in function to a regular, raw hard drive, except that it is available over the network and backed by the Ceph cluster.

You can do with an RBD what you would do with an ordinary hard drive: for example, you can partition it, format it with a file system, and mount it.
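As a sketch, creating and using an RBD from a client host looks roughly like this (the pool and image names are just examples, and the commands assume a working Ceph cluster with the rbd kernel module loaded):

```shell
# Create a 10 GB image named "vmdisk" in the default "rbd" pool
rbd create vmdisk --size 10240

# Map it to a local block device (it shows up as e.g. /dev/rbd0)
rbd map vmdisk

# From here it behaves like any other disk: format it and mount it
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/vmdisk
mount /dev/rbd0 /mnt/vmdisk
```

Note that an ordinary file system like ext4 must only be mounted on one host at a time, even though the underlying RBD is reachable from every host.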

Because it is backed by Ceph, you can access an RBD from any server that can reach the Ceph cluster over the network.

In your case this means that you can choose to power up a virtual machine on any of your servers, and it will be able to find and use its virtual machine image on the Ceph storage.
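With KVM/libvirt this amounts to pointing the virtual machine's disk at the RBD image. A minimal sketch of the domain's disk element (the pool/image name "rbd/vmdisk" is an assumption, and a cephx setup would additionally need an auth section):

```xml
<disk type='network' device='disk'>
  <!-- "rbd/vmdisk" is pool/image; adjust to your own names -->
  <source protocol='rbd' name='rbd/vmdisk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```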

Your earlier postings seemed to indicate that you wanted multiple servers using the same image at the same time. This introduces an extra layer of complexity - but it doesn't seem to me now that this is something you need at all.

You can power up a virtual machine backed by Ceph storage on one server and live-migrate it to a different server. This will work just fine, as KVM/libvirt coordinates the handover between the two servers so that they won't use the RBD at the same time in a conflicting way.
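With libvirt the migration itself is a one-liner, sketched here with hypothetical domain and host names:

```shell
# Live-migrate the running domain "vm1" to host2.
# Because the disk image lives in Ceph, only the VM's memory state is
# transferred; no disk data has to be copied between the hosts.
virsh migrate --live vm1 qemu+ssh://host2/system
```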

OpenNebula version 4.0 has added extra support for Ceph. It is currently available as a release candidate. As you're just starting to build your system, it would probably be a good idea to test with the release candidate and upgrade to 4.0 when the final release is made.

That will save you a lot of time trying to customize OpenNebula for Ceph, as that work has already been done by the OpenNebula team.
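In 4.0 a Ceph-backed datastore can be registered with a small template; the attribute values below are illustrative assumptions, so check the OpenNebula documentation for your exact version:

```
NAME      = ceph_ds
DS_MAD    = ceph
TM_MAD    = ceph
POOL_NAME = one
```

You would then register it with `onedatastore create ceph_ds.conf` and point your virtual machine images at the new datastore.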

--
Jens Kristian Søgaard, Mermaid Consulting ApS,
jens@xxxxxxxxxxxxxxxxxxxx,
http://www.mermaidconsulting.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




