The reason I like the block device is that it has the best read performance, but not being able to share it between clients is a fatal drawback here. Maybe I should change my strategy, or tune CephFS instead. My goal is to use the cluster to host a large number of small images, each about 17 KB.
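For example, one idea might be to skip the filesystem layer entirely and store each image as a plain RADOS object, which any number of clients can read concurrently. A rough sketch with the `rados` CLI (the pool and object names are just examples):

    ceph osd pool create images 128                  # example pool with 128 placement groups
    rados -p images put img-0001 ./img-0001.jpg      # store one ~17 KB image as an object
    rados -p images get img-0001 /tmp/img-0001.jpg   # read it back, from any client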
--
786-554-3993
+86-138-1174-5701
On Wed, May 1, 2013 at 11:31 AM, Mike Lowe <j.michael.lowe@xxxxxxxxx> wrote:
That is the expected behavior. RBD emulates a real device; you wouldn't expect good things to happen if you plugged the same drive into two different machines at once (perhaps with some soldering). There is no built-in mechanism for two machines to access the same block device concurrently; you would need to add a bunch of extra machinery, the way OCFS2 or GFS2 do. CephFS, on the other hand, is a parallel filesystem designed for multiple concurrent clients, and it does have such mechanisms (with limits; read the docs).
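If all you need is for several clients to see the same files, mounting CephFS on each of them is the usual route. Roughly, assuming the kernel client (the monitor address and secret file below are placeholders):

    # run on each client that needs access
    mount -t ceph 192.168.0.10:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret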
On May 1, 2013, at 11:19 AM, Yudong Guang <guangyudongbuaa@xxxxxxxxx> wrote:
> Hi,
>
> I've been trying to use block device recently. I have a running cluster with 2 machines and 3 OSDs.
>
> On a client machine, let's say A, I created an RBD image using `rbd create`, then formatted it, mounted it, and wrote something to it. Everything worked fine.
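> For reference, the steps were roughly the following (the image name is an example, and the device node may differ on your system):
>
>     rbd create --size 1024 test-image    # 1 GB image in the default 'rbd' pool
>     rbd map test-image                   # appeared as /dev/rbd0 for me
>     mkfs.ext4 /dev/rbd0
>     mount /dev/rbd0 /mnt/test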
>
> However, a problem occurred when I tried to use this image on the other client, let's say B, where I mapped the same image that was created on A. I found that changes made on either client were not visible on the other; but if I unmap the device and then map it again, the changes show up.
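> On B, the sequence looked roughly like this (again, the device name may differ):
>
>     rbd map test-image       # map the same image on client B
>     # ... writes made on A are not visible here ...
>     rbd unmap /dev/rbd0
>     rbd map test-image       # after remapping, A's changes appear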
>
> I tested the same thing with CephFS, and there was no such problem: every change made on one client is visible on the other instantly.
>
> I wonder whether this behavior of the RADOS Block Device is normal or not. Is there any way to read and write the same image from multiple clients?
>
> Any idea is appreciated.
>
> Thanks
Yudong Guang
guangyudongbuaa@xxxxxxxxx
786-554-3993
+86-138-1174-5701
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com