On 08/18/2013 10:58 PM, Wolfgang Hennerbichler wrote:
On Sun, Aug 18, 2013 at 06:57:56PM +1000, Martin Rudat wrote:
Hi,
On 2013-02-25 20:46, Wolfgang Hennerbichler wrote:
maybe some of you are interested in this - I'm using a dedicated VM to
back up important VMs that have their storage in RBD. This is nothing
fancy and not implemented perfectly, but it works. The VMs don't notice
that they're being backed up; the only requirement is that the VM's
filesystem sits directly on the RBD, because the script doesn't
calculate partition table offsets.
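(For reference, the general shape of it is just "snapshot, back up, drop the snapshot". Below is a rough Python sketch of only the rbd side of that - the pool and image names are hypothetical placeholders, the step that attaches the snapshot to the backup VM is left out, and this is not the actual script.)

#!/usr/bin/env python
# Minimal sketch of the snapshot-and-export idea, assuming the rbd CLI
# is installed. Pool/image names below are placeholders, not the real setup.
import subprocess
import datetime

POOL = "rbd"                 # hypothetical pool name
IMAGE = "important-vm-disk"  # hypothetical image name

def backup(pool, image):
    snap = datetime.date.today().strftime("backup-%Y%m%d")
    src = "%s/%s@%s" % (pool, image, snap)

    # Create a point-in-time snapshot of the running VM's image.
    subprocess.check_call(["rbd", "snap", "create", src])
    # Export the snapshot to a local file (could also be attached/mounted
    # on a backup VM instead, as described above).
    subprocess.check_call(["rbd", "export", src, "%s-%s.img" % (image, snap)])
    # Drop the snapshot again once the backup is done.
    subprocess.check_call(["rbd", "snap", "rm", src])

if __name__ == "__main__":
    backup(POOL, IMAGE)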
Looking at how you're doing that: if you trust the script to create new
snapshots, couldn't you do it with less machinery by installing the Ceph
binaries on the backup host, creating the snapshot and mapping it with
rbd, rather than attaching it to the VM?
This was written at a time when kernels could not map format 2 RBD images.
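(With a kernel that can map format 2 images, Martin's "less machinery" variant might look roughly like the sketch below. Pool/image/snapshot names, the mount point and the backup target are all hypothetical, and ext filesystems may want ro,noload if the journal cannot be replayed on the read-only device.)

#!/usr/bin/env python
# Sketch: snapshot, map with the kernel rbd client on the backup host,
# mount read-only, copy files, then tear everything down again.
import subprocess

POOL, IMAGE, SNAP = "rbd", "important-vm-disk", "nightly"  # placeholders
MOUNTPOINT = "/mnt/backup"

src = "%s/%s@%s" % (POOL, IMAGE, SNAP)

subprocess.check_call(["rbd", "snap", "create", src])
# rbd map prints the block device it created, e.g. /dev/rbd0.
dev = subprocess.check_output(["rbd", "map", src]).decode().strip()

try:
    # Snapshots are read-only, so mount read-only.
    subprocess.check_call(["mount", "-o", "ro", dev, MOUNTPOINT])
    try:
        subprocess.check_call(["rsync", "-a", MOUNTPOINT + "/",
                               "/backup/important-vm/"])
    finally:
        subprocess.check_call(["umount", MOUNTPOINT])
finally:
    subprocess.check_call(["rbd", "unmap", dev])
    subprocess.check_call(["rbd", "snap", "rm", src])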
Also, where's the fsck call? You're snapshotting a running system, so
it's almost guaranteed that the snapshot was taken in the middle of a
batch of writes. Then again, it would be nice to be able to ask the VM
to sync first, to capture a consistent filesystem.
I use journaling filesystems. The journal is replayed during mount (this can be seen in the kernel log), and the FS is therefore considered clean.
I don't know about recent kernels, but older ones could be made to
crash by boldly mounting a filesystem that hadn't been fscked.
This works for production systems. That's what journals are all about, right?
Correct, but older kernels might not respect write barriers correctly. If
you use a modern kernel (I think > 2.6.36 or so) there won't be a problem.
Like you said, the journal will be replayed on mount and the FS will be
clean.
It's nothing more than an unexpected shutdown.
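(For anyone who still wants the fsck Martin asked about, a read-only check can be run against the mapped snapshot without modifying it. A small sketch, assuming a hypothetical /dev/rbd0 left over from a previous rbd map:)

import subprocess

# Hypothetical device path from a previous "rbd map pool/image@snap".
DEV = "/dev/rbd0"

# fsck -n answers "no" to every repair prompt, so the check is read-only
# and safe to run against an RBD snapshot; a non-zero exit status means
# problems were detected (or the check could not be completed).
status = subprocess.call(["fsck", "-n", DEV])
print("fsck exit status: %d" % status)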
Wido
wogri
--
Martin Rudat
--
Wido den Hollander
42on B.V.
Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com