Re: Concurrency in ceph

Yes, OpenStack also uses libvirt/qemu/kvm, thanks.

On 18 Nov 2014 23:50, "Campbell, Bill" <bcampbell@xxxxxxxxxxxxxxxxxxxx> wrote:
I can't speak for OpenStack, but OpenNebula uses Libvirt/QEMU/KVM to access an RBD directly for each virtual instance deployed, live-migration included (as each RBD is in and of itself a separate block device, not file system).  I would imagine OpenStack works in a similar fashion.
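For illustration, the libvirt disk definition for such an instance looks roughly like the sketch below; the pool/image name, monitor host, and secret UUID are placeholders rather than anything from a real deployment:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <!-- QEMU talks to the cluster through librbd; there is no file system
           or image file in between, just the RBD image itself -->
      <source protocol='rbd' name='one/vm-42-disk-0'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>

Because the guest's disk lives entirely in RADOS, any KVM host that can reach the monitors can run the instance, which is what makes live migration straightforward.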


From: "hp cre" <hpcre1@xxxxxxxxx>
To: "Gregory Farnum" <greg@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, November 18, 2014 4:43:07 PM
Subject: Re: Concurrency in ceph

Ok thanks Greg.
But what OpenStack does, AFAIU, is use RBD devices directly, one for each VM instance, right? And that's how it supports live migration on KVM, etc., right? OpenStack and similar cloud frameworks don't need to create VM instances on filesystems, am I correct?

On 18 Nov 2014 23:33, "Gregory Farnum" <greg@xxxxxxxxxxx> wrote:
On Tue, Nov 18, 2014 at 1:26 PM, hp cre <hpcre1@xxxxxxxxx> wrote:
> Hello everyone,
>
> I'm new to Ceph, but I've been working with proprietary clustered filesystems
> for quite some time.
>
> I almost understand how Ceph works, but I have a couple of questions which
> have been asked here before, though I didn't understand the answers.
>
> In the closed-source world, we use clustered filesystems like the Veritas
> Cluster File System to mount a shared block device (over a SAN) on more than
> one compute node concurrently for shared read/write.
>
> What I can't seem to get a solid and clear answer for is this:
> How can I use ceph to do the same thing?  Can RADOS guarantee coherency and
> integrity of my data if I use an rbd device with any filesystem on top of
> it?  Or must I still use a cluster aware filesystem such as vxfs or ocfs?

RBD behaves just like a regular disk if you mount it to multiple nodes
at once (although you need to disable the client caching). This means
that the disk accesses will be coherent, but using ext4 on top of it
won't work because ext4 assumes it is the only accessor — you have to
use a cluster-aware FS like ocfs2. A SAN would have the same problem
here, so I'm not sure why you think it works with them...
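As a rough sketch of what that looks like in practice (pool and image names here are made up, and the ocfs2 cluster stack is assumed to already be configured on both nodes):

    # librbd/QEMU clients: disable client-side caching in ceph.conf
    # (the kernel RBD client does not use the librbd cache at all)
    [client]
        rbd cache = false

    # on each node, map the same image with the kernel RBD client
    rbd map rbd/shared01                 # shows up as e.g. /dev/rbd0

    # on one node only: put a cluster-aware filesystem on it
    mkfs.ocfs2 -N 2 -L shared01 /dev/rbd0

    # on every node:
    mount -t ocfs2 /dev/rbd0 /mnt/shared

Formatting the same image with ext4 or XFS and mounting it on two nodes would corrupt it, exactly as it would on a shared SAN LUN.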


> And is CephFS going to solve this problem? Or does it not have support for
> concurrent read/write access among all nodes mounting it?

CephFS definitely does support concurrent access to the same data.
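For example, the same tree can be mounted on any number of clients at the same time (the monitor address and secret file below are placeholders):

    # kernel CephFS client, on every node that needs shared access
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
          -o name=admin,secretfile=/etc/ceph/admin.secret

    # or, alternatively, the userspace client
    ceph-fuse /mnt/cephfs

The MDS coordinates the clients, so no additional cluster filesystem layer is needed on top.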

> And, do iSCSI targets over RBD devices behave the same?

Uh, yes, iSCSI over rbd will be the same as regular RBD in this
regard, modulo anything the iSCSI gateway might be set up to do.
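A rough sketch of one common way to set that up, by mapping the image on a gateway host and re-exporting it through the stock Linux LIO target (names below are placeholders, and ACL/portal configuration is omitted):

    rbd map rbd/shared01                               # e.g. /dev/rbd0
    targetcli /backstores/block create rbd0 /dev/rbd0
    targetcli /iscsi create iqn.2014-11.com.example:rbd0
    targetcli /iscsi/iqn.2014-11.com.example:rbd0/tpg1/luns create /backstores/block/rbd0

The initiators then see an ordinary iSCSI LUN whose bits happen to live in RADOS.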
-Greg

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

