Re: Why RBD is not enough [was: Inconsistent view on mounted CephFS]

Maciej Gałkiewicz writes:
> On 13 September 2013 17:12, Simon Leinen <simon.leinen@xxxxxxxxx> wrote:
>> 
>> [We're not using it *instead* of rbd, we're using it *in addition to*
>> rbd.  For example, our OpenStack (users') cinder volumes are stored in
>> rbd.]

> So you probably have cinder volumes in rbd but you boot instances from
> images. This is why you need CephFS for /var/lib/nova/instances. I
> suggest creating volumes from images and booting instances from them.
> CephFS is not required then.

Thanks, I know that we could "boot from volume".  Two problems:

1.) Our OpenStack installation is not a private cloud; we allow
    external users to set up VMs.  These users need to be able to use
    the "standard" workflow (Horizon) to start VMs from an image.

2.) We didn't manage to make boot from volume work with RBD in Folsom.
    Yes, presumably it works fine in Grizzly and above, so we should
    just upgrade.
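
(For reference, the flow Maciej suggests would look roughly like this
through the Python clients.  This is only a sketch: the credentials,
image ID, flavor and sizes below are placeholders, and the module paths
are the ones from the client releases of that era.)

    # Sketch only -- all credentials/IDs below are placeholders.
    import time
    from cinderclient.v1 import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    USER, PASSWORD, TENANT = "demo", "secret", "demo"
    AUTH_URL = "http://keystone:5000/v2.0"
    IMAGE_ID = "11111111-2222-3333-4444-555555555555"  # a Glance image ID
    FLAVOR_ID = "1"

    cinder = cinder_client.Client(USER, PASSWORD, TENANT, AUTH_URL)
    nova = nova_client.Client(USER, PASSWORD, TENANT, AUTH_URL)

    # 1. Create a bootable volume from the image; with a Ceph-backed
    #    cinder this ends up as an RBD volume.
    vol = cinder.volumes.create(20, display_name="boot-vol",
                                imageRef=IMAGE_ID)
    while cinder.volumes.get(vol.id).status != "available":
        time.sleep(2)

    # 2. Boot from that volume instead of from the image.  Legacy mapping
    #    format: <volume-id>:<type>:<size>:<delete-on-termination>.
    nova.servers.create("vm-from-volume", image=None, flavor=FLAVOR_ID,
                        block_device_mapping={"vda": "%s:::0" % vol.id})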

>> What we want to achieve is to have a shared "instance store"
>> (i.e. "/var/lib/nova/instances") across all our nova-compute nodes, so
>> that we can e.g. live-migrate instances between different hosts.  And we
>> want to use Ceph for that.
>> 
>> In Folsom (but also in Grizzly, I think), this isn't straightforward to
>> do with RBD.  A feature[1] to make it more straightforward was merged in
>> Havana(-3) just two weeks ago.

> I don't get it. I am successfully using live-migration (in Grizzly,
> haven't tried Folsom) of instances booted from cinder volumes stored as
> rbd volumes. What is not straightforward to do? Are you using KVM?

As I said, "boot from volume" is not really an option for us.
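
(The migration call itself is the same either way; what CephFS gives us
is the shared /var/lib/nova/instances that makes it work for instances
booted from images.  Roughly, again just a sketch -- the VM name, target
host and credentials are placeholders, using the novaclient of that era:)

    # Sketch only -- VM name, target host and credentials are placeholders.
    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client("admin", "secret", "admin",
                              "http://keystone:5000/v2.0")
    server = nova.servers.find(name="some-vm")
    # With /var/lib/nova/instances on shared storage, the ephemeral disks
    # don't have to be copied, so no block migration is needed.
    server.live_migrate(host="compute-02", block_migration=False,
                        disk_over_commit=False)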

>> Yes, people want shared storage that they can access in a POSIXly way
>> from multiple VMs.  CephFS is a relatively easy way to give them that,
>> though I don't consider it "production-ready" - mostly because secure
>> isolation between different tenants is hard to achieve.

> For now GlusterFS may fit better here.

Possibly, but it's another system we'd have to learn, configure and
support.  And CephFS is already in standard kernels (though obviously
it's not reliable, and there may be horrible regressions such as this
bug in 3.10).
-- 
Simon.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com