Re: can I attach a volume to 2 servers


 



Mapping a single RBD on multiple servers isn’t going to do what you want unless you’re putting some kind of clustered filesystem on it.  Exporting the filesystem via an NFS server will generally be simpler.
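As a rough sketch of the NFS approach (the image name, paths, and hostname below are placeholders, not taken from your setup): map and format the RBD on one box, then export the filesystem to the others:

# on the machine acting as the NFS server
rbd map rbd/shared-image           # exposes the image as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0                 # an ordinary, non-clustered filesystem is fine here
mkdir -p /srv/nova-share
mount /dev/rbd0 /srv/nova-share
echo '/srv/nova-share *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra

# on each compute node
mount -t nfs nfs-server:/srv/nova-share /var/lib/nova/instances

Only the NFS server ever touches the block device, so the coherence problems described below don't apply.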

 

You’ve already encountered one problem with sharing a block device without a clustered filesystem:  One server doesn’t know when some other server has changed something, so a given server will only show changes it has made unless you somehow refresh the server’s knowledge (with a remount, for example).
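To illustrate the remount workaround (device and mount point are placeholders):

# on the server with the stale view
umount /mnt/vol
mount /dev/rbd0 /mnt/vol

That may be tolerable for a read-mostly experiment, but it does nothing about the write problem described next.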

 

A related but much bigger problem arises when multiple servers write to the same block device. Because no server is aware of what the other servers are doing, it’s essentially guaranteed that one server will partially overwrite things another server just wrote, resulting in lost data and/or a broken filesystem.

 

-----

Edward Huyer

School of Interactive Games and Media

Golisano 70-2373

152 Lomb Memorial Drive

Rochester, NY 14623

585-475-6651

erhvks@xxxxxxx

 

Obligatory Legalese:

The information transmitted, including attachments, is intended only for the person(s) or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and destroy any copies of this information.

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of yang sheng
Sent: Monday, May 02, 2016 9:47 AM
To: Sean Redmond <sean.redmond1@xxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] can I attach a volume to 2 servers

 

Hi Sean,

 

Thanks for your reply.

 

I think Ceph and OpenStack are working fine for me. I can attach a bootable volume to a VM.

 

I am now trying to attach volumes to the physical servers (hypervisor nodes) and share some data among the hypervisors (based on the docs, the nova evacuate function requires that all hypervisors share the instance files).

 

In the doc they use an NFS cluster. I am wondering if I can use a Ceph volume instead of NFS.

 

(I have created a volume and attached it to 2 hypervisors, A and B, but when I write something on server A, server B can't see the file. I had to detach and re-attach the volume on server B before the changes showed up.)

 

 

 

On Mon, May 2, 2016 at 9:34 AM, Sean Redmond <sean.redmond1@xxxxxxxxx> wrote:

Hi,

 

You could set the options below to create ephemeral disks as RBDs:

 

[libvirt]

images_type = rbd
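(In Liberty the option is named images_type under [libvirt]; the older name libvirt_images_type belonged to [DEFAULT].) For context, the Ceph-with-OpenStack docs pair it with a few more [libvirt] options in nova.conf; the pool name, user, and secret UUID here are placeholders for your own values:

images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret UUID>

With images_type = rbd the instance disks live in the RBD pool rather than on local disk under /var/lib/nova/instances, which removes most of the need for shared instance storage during an evacuate.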

 

On Mon, May 2, 2016 at 2:28 PM, yang sheng <forsaks.30@xxxxxxxxx> wrote:

Hi 

 

I am using Ceph Infernalis.

 

It works fine with my OpenStack Liberty.

 

I am trying to test nova evacuate.

 

All the VMs' volumes are shared among all compute nodes; however, the instance files (/var/lib/nova/instances) are on each compute node's local storage.

 

Based on the Red Hat docs (https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/6/html/Administration_Guide/section-evacuation.html), nova evacuate requires the instance files to be shared as well, and they set up an NFS cluster for that.
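(For what it's worth, once shared instance storage is in place, the evacuate call itself is just something like the following, where the instance ID and target host are placeholders:

nova evacuate --on-shared-storage <instance-uuid> <target-host>

The --on-shared-storage flag tells nova to reuse the existing instance files instead of rebuilding the disk.)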

 

Since Ceph is shared among all nodes as well, I was thinking of creating a volume in Ceph and attaching it to all compute nodes.

 

Just wondering, is this doable?

 

(I have already attached this volume to 2 servers, A and B. If I write something on server A, it doesn't seem to be visible on server B; I have to re-attach the volume to server B before server B can see it.)

 


 

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
