On Mon, 11 Feb 2013, Wolfgang Hennerbichler wrote:
> Hi,
>
> I'm currently putting my first ceph VM cluster into production. It's a
> cluster where data integrity matters, and I hope ceph won't let me down.
>
> Currently I'm trying to figure out how to back up virtual machines. My
> plan was originally:
> * Create a snapshot of a VM
> * Clone that snapshot
> * map that snapshot
> * mount that snapshot, let the journaling filesystem replay its journal
> * back up that snapshot

There is one more wrinkle that you should test: mounting the read-only
snapshot in some fs's (ext4 at least, iirc) actually writes to the device
to replay the journal.  At least this was true at one point in time.

> it turns out that this is a little more complicated than I thought:
>
> * cloning is only supported with format=2
> * the rbd kernel block device is only supported with format=1
>
> so now I'm stuck. I'd also be happy to play around with fuse or
> something. Anybody got any tips?

If you can use KVM + librbd, that would be the ideal route, as cloning is
fully supported there.

Alternatively, if you can avoid using cloning, at least temporarily, it
will be available for the kernel client in a (kernel) release or two.

> Is format=2 in any respect more 'unstable' than format=1? I do have to
> decide these days whether all the VMs will be running on format=1 or
> format=2.

I wouldn't consider it any less stable than format=1.  In fact, a few
things that behave strangely with format=1 (like renaming an image while
it is in use) work better with format=2.

sage

> thanks a lot for your answers
> Wolfgang
>
> --
> DI (FH) Wolfgang Hennerbichler
> Software Development
> Unit Advanced Computing Technologies
> RISC Software GmbH
> A company of the Johannes Kepler University Linz
>
> IT-Center
> Softwarepark 35
> 4232 Hagenberg
> Austria
>
> Phone: +43 7236 3343 245
> Fax: +43 7236 3343 250
> wolfgang.hennerbichler@xxxxxxxxxxxxxxxx
> http://www.risc-software.at
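
For reference, a minimal sketch of the snapshot/clone/backup workflow
discussed above. The pool name "rbd", image name "vm1", and backup paths
are all hypothetical, and the map step assumes a kernel client new enough
to map format=2 images (which, per the above, is still a release or two
away):

    # Cloning requires a format=2 image (hypothetical names throughout):
    $ rbd create --format 2 --size 20480 rbd/vm1

    # 1. Snapshot the VM's image and protect it; clones can only be
    #    made from protected snapshots:
    $ rbd snap create rbd/vm1@backup
    $ rbd snap protect rbd/vm1@backup

    # 2. Clone the snapshot into a writable image:
    $ rbd clone rbd/vm1@backup rbd/vm1-backup

    # 3. Map and mount the clone.  'noload' skips the ext4 journal
    #    replay, so the mount stays a pure read, sidestepping the
    #    write-on-ro-mount wrinkle mentioned above:
    $ rbd map rbd/vm1-backup
    $ mount -o ro,noload /dev/rbd/rbd/vm1-backup /mnt/backup

    # 4. Back up, then tear everything down:
    $ rsync -a /mnt/backup/ /srv/backups/vm1/
    $ umount /mnt/backup
    $ rbd unmap /dev/rbd/rbd/vm1-backup
    $ rbd rm rbd/vm1-backup
    $ rbd snap unprotect rbd/vm1@backup
    $ rbd snap rm rbd/vm1@backup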
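
And a sketch of the KVM + librbd route with the same hypothetical names.
qemu talks to the cluster through librbd, which fully supports format=2
clones, so no kernel mapping is involved:

    # Copy the clone out through librbd; no rbd map needed:
    $ qemu-img convert -O raw rbd:rbd/vm1-backup /srv/backups/vm1.img

    # A guest can likewise attach the clone directly:
    $ qemu-system-x86_64 -m 512 \
        -drive format=raw,file=rbd:rbd/vm1-backup:conf=/etc/ceph/ceph.conf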