My understanding was that Cinder is needed to create/delete/manage volumes, but I/O to the volumes goes directly from the hypervisors to the cluster. In theory you could lose your Cinder service and the VMs would stay up.
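
To be clear, the data path is just librbd (or the krbd kernel client)
talking directly to the OSDs, with no Cinder in the loop. A minimal
sketch of that direct access using the python-rbd bindings, assuming a
pool named 'volumes' and an image named 'volume-0001' (both names
hypothetical):

# Read from an RBD image directly via librbd -- no Cinder involved.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')            # hypothetical pool
with rbd.Image(ioctx, 'volume-0001') as image:   # hypothetical image
    data = image.read(0, 4096)                   # read 4 KiB at offset 0
    print('read %d bytes' % len(data))
ioctx.close()
cluster.shutdown()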
On 25 Jul 2017 4:18 a.m., "Brady Deetz" <bdeetz@xxxxxxxxx> wrote:
Thanks for pointing to some documentation. I'd seen that, and it is certainly an option. From my understanding, a Cinder deployment would have the same failure domains and similar performance characteristics as an oVirt + NFS + RBD deployment. This is acceptable. But the dream I have in my head is one where the RBD images are mounted and controlled on each hypervisor instead of through a central storage authority like Cinder (see the sketch below Jason's reply). Does that exist for anything, or is this a fundamentally flawed idea?

On Mon, Jul 24, 2017 at 9:41 PM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:

oVirt 3.6 added Cinder/RBD integration [1] and it looks like they are
currently working on integrating Cinder within a container to simplify
the integration [2].
[1] http://www.ovirt.org/develop/release-management/features/storage/cinder-integration/
[2] http://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/
Jason
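
On the "controlled on each hypervisor" idea further up the thread: the
lifecycle operations Cinder performs are ordinary librbd calls that any
host can issue for itself; what oVirt lacks is orchestration around
them. A rough sketch of a hypervisor managing its own image with the
python-rbd bindings (pool 'vm-pool' and image 'vm-disk-01' are
hypothetical names, error handling omitted):

# Per-hypervisor image lifecycle via librbd -- no central volume service.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('vm-pool')

rbd_inst = rbd.RBD()
rbd_inst.create(ioctx, 'vm-disk-01', 10 * 1024**3)  # create a 10 GiB image

with rbd.Image(ioctx, 'vm-disk-01') as image:
    image.resize(20 * 1024**3)                      # grow it to 20 GiB

# A hypervisor would now hand the image to QEMU via librbd, or map it
# locally with krbd; removal shown here only to make the demo self-cleaning.
rbd_inst.remove(ioctx, 'vm-disk-01')

ioctx.close()
cluster.shutdown()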
On Mon, Jul 24, 2017 at 10:27 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
> Funny enough, I just had a call with Red Hat where the OpenStack engineer was
> voicing his frustration that there wasn't any movement on RBD for oVirt.
> This is important to me because I'm building out a user-facing private cloud
> that just isn't going to be big enough to justify OpenStack and its
> administrative overhead. But, I already have 1.75PB (soon to be 2PB) of
> CephFS in production. So, it puts me in a really difficult design position.
>
> On Mon, Jul 24, 2017 at 9:09 PM, Dino Yancey <dino2gnt@xxxxxxxxx> wrote:
>>
>> I was as much as told by Red Hat in a sales call that they push Gluster
>> for oVirt/RHEV and Ceph for OpenStack, and don't have any plans to
>> change that in the short term. (Note this was about a year ago, I
>> think, so this isn't super current information.)
>>
>> I seem to recall the hangup was that oVirt had no orchestration
>> capability for RBD comparable to OpenStack, and that CephFS wasn't
>> (yet?) viable for use as a "POSIX filesystem" oVirt storage domain.
>> Personally, I feel like Red Hat is worried about competing with
>> itself, GlusterFS versus CephFS, and is choosing to focus on
>> Gluster as a filesystem, and Ceph as everything minus the filesystem.
>>
>> Which is a shame, as I'm a fan of both Ceph and oVirt and would love
>> to use my existing RHEV infrastructure to bring Ceph into my
>> environment.
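>>
>> For what it's worth, the "POSIX filesystem" question is easy to poke
>> at: CephFS presents ordinary file semantics, which is all a POSIX
>> storage domain consumes (in practice through a kernel or FUSE mount).
>> A quick sketch with the python-cephfs bindings, purely illustrative
>> and with a hypothetical path:
>>
>> # Create and write a file on CephFS through libcephfs.
>> import cephfs
>>
>> fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
>> fs.mount()
>> fd = fs.open(b'/ovirt-domain-test.txt', 'w', 0o644)  # hypothetical path
>> fs.write(fd, b'posix enough?', 0)
>> fs.close(fd)
>> fs.unmount()
>> fs.shutdown()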
>>
>>
>> On Mon, Jul 24, 2017 at 8:39 PM, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
>> > I haven't seen much talk about direct integration with oVirt. Obviously
>> > it kind of comes down to oVirt being interested in participating. But is
>> > the only hold-up getting development time toward an integration, or is
>> > there some kind of friction between the dev teams?
>> >
>>
>>
>>
>> --
>> ______________________________
>> Dino Yancey
>> 2GNT.com Admin
>
>
>
--
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com