Mounting a shared block device on multiple hosts

Hello,

I would like to mount a single RBD on multiple hosts so that the block device can be shared.
Is this possible?  I understand that it's not possible to share data between the different interfaces, e.g. CephFS and RBD, but I haven't found anything stating whether sharing an RBD between hosts is or is not possible.

I have followed the instructions on the ceph-deploy GitHub page (I was originally following the 5-minute quick start, http://ceph.com/docs/next/start/quick-start/, but when I got to the mkcephfs step it errored out and pointed me to the GitHub page).  As I only have three servers, I am running the OSDs and monitors on all of the hosts; I realize this isn't ideal, but I'm hoping it will work for testing purposes.
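For reference, the ceph-deploy sequence I ran was roughly the following (from memory; the disk devices are just examples, the real ones differ per host):

>> ceph-deploy new kitt red6 shepard
>> ceph-deploy install kitt red6 shepard
>> ceph-deploy mon create kitt red6 shepard
>> ceph-deploy gatherkeys kitt
>> ceph-deploy osd create kitt:/dev/sdb red6:/dev/sdb red6:/dev/sdc shepard:/dev/sdb shepard:/dev/sdc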

This is what my cluster looks like:

>> root@red6:~# ceph -s
>>    health HEALTH_OK
>>    monmap e2: 3 mons at {kitt=192.168.0.35:6789/0,red6=192.168.0.40:6789/0,shepard=192.168.0.2:6789/0}, election epoch 10, quorum 0,1,2 kitt,red6,shepard
>>    osdmap e29: 5 osds: 5 up, 5 in
>>     pgmap v1692: 192 pgs: 192 active+clean; 19935 MB data, 40441 MB used, 2581 GB / 2620 GB avail; 73B/s rd, 0op/s
>>    mdsmap e1: 0/0/1 up

To test this, I created a 20GB RBD, mapped it, and mounted it at /media/tmp on all of the hosts in my cluster, so all of the hosts are also clients.
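The commands on each host were along these lines (again from memory, so the image name, device path, and filesystem are only examples):

>> rbd create shared-test --size 20480                   # 20 GB image; run once
>> rbd map shared-test                                   # on every host; appears here as /dev/rbd0
>> mkfs.ext4 /dev/rbd0                                   # run once, on one host only
>> mkdir -p /media/tmp && mount /dev/rbd0 /media/tmp     # on every host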

Then I use dd to create a 1MB file named test-`hostname`:

>> dd if=/dev/zero of=/media/tmp/test-`hostname` bs=1024 count=1024; 

After the file is created, I wait for the writes to finish in `ceph -w`.  When I then list /media/tmp on each host, I see that host's own /media/tmp/test-`hostname` file.  If I unmount and then remount the RBD, I get mixed results: typically, I see the file that was created on the host listed first in the quorum.  For example, in the test I did while typing this e-mail, "kitt" is listed first (quorum 0,1,2 kitt,red6,shepard), and the file created on kitt is the one I see when I unmount and then remount the RBD on shepard.
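The remount check on each host is simply (device path again from memory):

>> umount /media/tmp
>> mount /dev/rbd0 /media/tmp
>> ls /media/tmp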

Where this is going: I would like to use Ceph as the back-end storage solution for my virtualization cluster.  The general idea is that the hypervisors will all have a shared mountpoint holding images and VMs, so VMs can easily be migrated between hypervisors.  Actually, I was thinking I would create one mountpoint each for images and for VMs for performance reasons; am I likely to see performance gains using more, smaller RBDs versus fewer, larger RBDs?
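Concretely, I was picturing something like this on each hypervisor (names, sizes, and mountpoints are only placeholders):

>> rbd create images --size 204800     # ~200 GB for installation media / base images
>> rbd create vms --size 1048576       # ~1 TB for VM disks
>> # map, mkfs once, and mount on every hypervisor, as with the test image above:
>> rbd map images && mount /dev/rbd/rbd/images /media/images
>> rbd map vms && mount /dev/rbd/rbd/vms /media/vms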

Thanks for any feedback,
Jon A
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
