Oh, and note there's recently been an RBD caching mode added; that would interfere with any multi-mount or sharing you attempt on an RBD device. It's optional, though.
http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/6402

Still, you'd probably need something on top of it to manage sharing, just as with any other block device. I'm interested to see what the devs say, though.

On Sat, Aug 11, 2012 at 6:35 PM, Marcus Sorensen <shadowsor@xxxxxxxxx> wrote:
> What I mean is that my understanding of RBD is that it is designed to
> do no more than present a block device. In that context, what you're
> asking is more like whether they will support persistent reservations.
> RBD, being just a block device, sits at a lower level, and you'd have
> to add something on top of it that is aware of sharing/locking.
>
> On Sat, Aug 11, 2012 at 6:26 PM, Marcus Sorensen <shadowsor@xxxxxxxxx> wrote:
>> But you put /dev/drbd atop a block device.
>>
>> On Aug 11, 2012 6:09 PM, "Sébastien Han" <han.sebastien@xxxxxxxxx> wrote:
>>>
>>> Hi Marcus,
>>>
>>> I didn't quite follow your first sentence, but I don't think so. For
>>> instance, DRBD manages its own /dev/drbd device and deliberately
>>> puts a lock on a resource in the 'secondary' state, as in
>>> single-primary mode. So I would say it's higher-level rather than
>>> lower...
>>>
>>> Cheers!
>>>
>>> On Sun, Aug 12, 2012 at 1:58 AM, Marcus Sorensen <shadowsor@xxxxxxxxx> wrote:
>>> > Isn't it supposed to be lower-level than that? More like just a
>>> > block device, such as a SAN or iSCSI device? DRBD (GFS, OCFS, CLVM)
>>> > goes on top of that.
>>> >
>>> > On Aug 11, 2012 5:51 PM, "Sébastien Han" <han.sebastien@xxxxxxxxx> wrote:
>>> >>
>>> >> Hi guys,
>>> >>
>>> >> With RBD images, it is in theory possible to mount them multiple
>>> >> times on different servers; of course **no one** wants that if
>>> >> they care about the consistency of their data :D
>>> >> I was wondering whether Ceph has any locking ability on the RBD
>>> >> device, as DRBD does with a secondary resource. Apparently not: I
>>> >> was able to mount an image on multiple servers and write data on
>>> >> both.
>>> >>
>>> >> Is this an upcoming feature?
>>> >>
>>> >> I don't really know the level of difficulty this kind of feature
>>> >> implies, but it would be nice to have.
>>> >>
>>> >> Cheers!
>>> >> --
>>> >> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>> >> the body of a message to majordomo@xxxxxxxxxxxxxxx
>>> >> More majordomo info at http://vger.kernel.org/majordomo-info.html
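The stacking Marcus describes — a cluster-aware filesystem on top of the shared block device, rather than locking inside RBD itself — might look roughly like the sketch below. This is only an illustration, not a tested recipe: the pool name `rbd`, image name `shared`, device path `/dev/rbd0`, and mount point `/mnt/shared` are all assumptions, and it presumes the rbd kernel module, client credentials, and an already-configured OCFS2 cluster stack on every host.

```shell
# On each host: map the RBD image as a local block device.
# (Hypothetical pool/image names; the kernel assigns the device path,
# assumed here to be /dev/rbd0.)
rbd map rbd/shared

# On ONE host only: format with a cluster-aware filesystem such as
# OCFS2, which coordinates concurrent writers via its own locking.
mkfs.ocfs2 /dev/rbd0

# On every host (OCFS2 cluster stack must already be configured):
mount -t ocfs2 /dev/rbd0 /mnt/shared
```

With a plain local filesystem (ext4, XFS) in place of OCFS2, the multi-mount Sébastien tried would corrupt data exactly as observed, since nothing below the filesystem arbitrates the concurrent writes.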