Re: Concurrent access to Ceph filesystems

On Mar 1, 2013, at 6:08 PM, "McNamara, Bradley" <Bradley.McNamara@xxxxxxxxxxx> wrote:

> I'm new, too, and I guess I just need a little clarification on Greg's statement.  The RBD filesystem is mounted to multiple VM servers, say, in a Proxmox cluster, and as long as any one VM image file on that filesystem is only being accessed from one node of the cluster, everything will work, and that's the way shared storage is intended to work within Ceph/RBD.  Correct?
> 

Technically it's a RADOS Block Device (RBD), not a filesystem.  I might suggest libvirt with sanlock to ease your mind about VMs fighting over the same disk.  Here is the way to think of it: with a hard drive, to access a sector you need the bus, the address on that bus, and the sector number.  The RBD analog is pool, image name, and object number.  The rbd kernel driver and the qemu driver combine that identifying information and translate between traditional ATA or SCSI commands and the collection of RADOS objects that make up an rbd image.
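
To make the address analogy concrete, here is a minimal sketch using the Python rados/rbd bindings; the pool name "rbd" and image name "vm-disk0" are placeholders for illustration, not anything from your setup:

    import rados
    import rbd

    # Connect to the cluster with the usual config file.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # The pool is the "bus" in the analogy.
        ioctx = cluster.open_ioctx('rbd')
        try:
            # The image name is the "address"; byte offsets inside the
            # image map onto the RADOS objects that back it.
            image = rbd.Image(ioctx, 'vm-disk0')
            try:
                data = image.read(0, 4096)  # read the first 4 KiB
                print('read %d bytes' % len(data))
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()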

> I can understand things blowing up if the same VM image file is being accessed from multiple nodes in the cluster, and that's where a clustered filesystem comes into play.
> 

Ideally all nodes talk to all nodes; if they don't, your cluster isn't balanced and isn't functioning properly.

> I guess in my mind/world I was envisioning a group of VM servers using one large RBD volume, that is mounted to each VM server in the group, to store the VM images for all the VM's in the group of VM servers.  This way the VM's could migrate to any VM server in the group using the RBD volume.
> 
> No?
> 

No, use one rbd device per VM.  I think what you are looking for is perhaps a Ceph storage pool, not an rbd "volume".
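
Put differently: one pool for the group, many images in it, one image per VM.  A rough sketch with the Python bindings (the pool name "vms", the guest names, and the 20 GiB size are all made up for illustration):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')  # one pool shared by the whole group

    # One image per VM; each VM server attaches only its own guests' images.
    size = 20 * 1024 ** 3  # 20 GiB
    for name in ('guest01', 'guest02', 'guest03'):
        rbd.RBD().create(ioctx, name, size)

    ioctx.close()
    cluster.shutdown()

Live migration still works in this model: the images live in the cluster, so the destination VM server simply opens an image once the source host has released it.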

> Brad
> 
> -----Original Message-----
> From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Gregory Farnum
> Sent: Friday, March 01, 2013 2:13 PM
> To: Karsten Becker
> Cc: ceph-users@xxxxxxxx
> Subject: Re:  Concurrent access to Ceph filesystems
> 
> On Fri, Mar 1, 2013 at 1:53 PM, Karsten Becker <karsten.becker@xxxxxxxxxxx> wrote:
>> Hi,
>> 
>> I'm new to Ceph. I currently find no answer in the official docs for 
>> the following question.
>> 
>> Can Ceph filesystems be used concurrently by clients, both when
>> accessing via RBD and CephFS? Concurrently means multiple clients
>> accessing and writing to the same Ceph volume (as is possible with
>> OCFS2) and, in the extreme case, to the same file at the same time.
>> Or is Ceph a "plain" distributed filesystem?
> 
> CephFS supports this very nicely, though it is of course not yet production-ready for most users. RBD provides block device semantics: you can mount it from multiple hosts, but if you aren't running cluster-aware software on top of it you won't like the results (e.g., you could run OCFS2 on top of RBD, but running ext4 on top of it will work precisely as well as it would with a regular hard drive that you somehow managed to plug into two systems at once).
> -Greg

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

