Re: Concurrent access to Ceph filesystems

This doesn't sound quite right, but I'm not sure if the problem is a
terminology disconnect or a conceptual one. Let's go through them.

On Fri, Mar 1, 2013 at 3:08 PM, McNamara, Bradley
<Bradley.McNamara@xxxxxxxxxxx> wrote:
> I'm new, too, and I guess I just need a little clarification on Greg's statement.  The RBD filesystem
> is mounted to multiple VM servers
By "RBD filesystem" do you mean an RBD image of probably between 10GB
and 2TB? Or do you mean that each VM server is talking to the Ceph
cluster? The former definition would be a "no, don't do that", the
second is "of course".

>, say, in a Proxmox cluster, and as long as any one VM image file on that filesystem is only being accessed from one node of the cluster,
> everything will work, and that's the way shared storage is intended to work within Ceph/RBD.  Correct?
>
> I can understand things blowing up if the same VM image file is being accessed from multiple nodes in the cluster, and that's where a clustered filesystem comes into play.

By "VM image file" do you mean an rbd image, or are you actually
talking about a single RBD volume, formatted to use ext4, hosting 10
different VM images?

> I guess in my mind/world I was envisioning a group of VM servers using one large RBD volume, that is mounted to each VM server in the group, to store the VM images for all the VM's in the group of VM servers.  This way the VM's could migrate to any VM server in the group using the RBD volume.

So an "RBD volume" in the Ceph project is equivalent to a single
logical SATA disk. Those volumes are stored in "RADOS pools", which
for the purpose of this conversation is the collection of physical
hardware you are aggregating into a Ceph cluster. You can mount the
RBD volume from however many servers you like at a time, but if you do
so you'd better be careful about cache flushes etc! A more likely
management setup would be to mount each volume to one server at a
time, and then if you need to migrate the VM to a different host you
attach it on-demand.
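
To make that concrete, the usual pattern is one RBD volume per VM,
all living in the same pool. A rough sketch with the python-rbd
bindings (volume name and size are just placeholders):

    import rados
    import rbd

    # Connect to the cluster and open the pool that holds the
    # per-VM volumes.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')

    # Create one volume per VM, e.g. 20 GB here. Every host in the
    # group can reach the pool, but each volume should only be
    # attached to one host at a time.
    rbd.RBD().create(ioctx, 'vm-disk-01', 20 * 1024 ** 3)

    ioctx.close()
    cluster.shutdown()

When a VM migrates, the destination host simply maps that same
volume; nothing about the pool changes.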
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

