For me personally, I would always feel more comfortable with backups on a completely different storage technology. Whilst there are many things you can do with snapshots and replication (two rough sketches are appended at the end of this message), there is always a small risk that whatever causes data loss on your primary system may also affect, or replicate to, your second copy. It all really depends on what you are trying to protect against, but tape still looks very appealing if you want to maintain a completely isolated copy of your data.

> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> Alexandre DERUMIER
> Sent: 06 May 2015 10:10
> To: Götz Reinicke
> Cc: ceph-users
> Subject: Re: How to backup hundreds or thousands of TB
>
> For the moment, you can use snapshots for backup:
>
> https://ceph.com/community/blog/tag/backup/
>
> I think async mirroring is on the roadmap:
> https://wiki.ceph.com/Planning/Blueprints/Hammer/RBD%3A_Mirroring
>
> If you use QEMU, you can do a QEMU full backup. (QEMU incremental
> backup is coming in QEMU 2.4.)
>
> ----- Original Message -----
> From: "Götz Reinicke" <goetz.reinicke@xxxxxxxxxxxxxxx>
> To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Wednesday, 6 May 2015 10:25:01
> Subject: How to backup hundreds or thousands of TB
>
> Hi folks,
>
> besides hardware, performance and failover design: how do you manage
> to back up hundreds or thousands of TB :) ?
>
> Any suggestions? Best practices?
>
> A second Ceph cluster at a different location? "Bigger archive" disks in
> good boxes? Or tape libraries?
>
> What kind of backup software can handle such volumes nicely?
>
> Thanks and regards, Götz
> --
> Götz Reinicke
> IT-Koordinator
>
> Tel. +49 7141 969 82 420
> E-Mail goetz.reinicke@xxxxxxxxxxxxxxx
>
> Filmakademie Baden-Württemberg GmbH
> Akademiehof 10
> 71638 Ludwigsburg
> www.filmakademie.de
>
> Registered at Amtsgericht Stuttgart, HRB 205016
>
> Chairman of the Supervisory Board: Jürgen Walter MdL, State Secretary in
> the Ministry of Science, Research and the Arts of Baden-Württemberg
>
> Managing Director: Prof. Thomas Schadt
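
To make the snapshot suggestion above concrete, here is a minimal sketch of an incremental RBD backup built on "rbd snap create" and "rbd export-diff". The pool, image, and backup directory names are placeholders invented for illustration, and the script assumes the standard rbd CLI is installed and can reach the cluster; treat it as an outline, not a finished backup tool.

#!/usr/bin/env python
"""Minimal sketch: incremental RBD backups via snapshots + export-diff.

Assumes the standard rbd CLI is installed and configured for the
cluster. Pool, image, and target directory are illustrative
placeholders.
"""
import datetime
import subprocess

POOL = "rbd"                # placeholder pool name
IMAGE = "vm-disk-1"         # placeholder image name
BACKUP_DIR = "/mnt/backup"  # placeholder target, e.g. a staging area for tape


def run(*cmd):
    """Echo a command, run it, and fail loudly on error."""
    print(" ".join(cmd))
    subprocess.check_call(cmd)


def backup(from_snap=None):
    """Snapshot the image, then export either a full image or a diff."""
    new_snap = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    run("rbd", "snap", "create", "%s/%s@%s" % (POOL, IMAGE, new_snap))

    target = "%s/%s-%s.diff" % (BACKUP_DIR, IMAGE, new_snap)
    cmd = ["rbd", "export-diff"]
    if from_snap:
        # Only blocks changed since the previous snapshot are exported.
        cmd += ["--from-snap", from_snap]
    cmd += ["%s/%s@%s" % (POOL, IMAGE, new_snap), target]
    run(*cmd)
    return new_snap


if __name__ == "__main__":
    base = backup()           # first run: full export
    backup(from_snap=base)    # later runs: incremental against the last snapshot

The resulting diff files can be replayed into a standby image on a second cluster with "rbd import-diff", or written off to tape, which gives you the fully isolated copy argued for at the top of this message.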
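
For the QEMU full backup mentioned above, QMP exposes a drive-backup command that copies a running guest's disk to a target file (the dirty-bitmap-based incremental variant is what arrives with QEMU 2.4). The socket path, device name, and target file below are placeholders, and the QMP handling is deliberately bare; a real script would also wait for the BLOCK_JOB_COMPLETED event before trusting the copy.

#!/usr/bin/env python
"""Minimal sketch: trigger a QMP drive-backup of a running guest.

Socket path, device name, and target file are illustrative
placeholders; start QEMU with e.g. -qmp unix:/path,server,nowait.
"""
import json
import socket

QMP_SOCKET = "/var/run/qemu-vm1.qmp"   # placeholder QMP unix socket
DEVICE = "drive-virtio-disk0"          # placeholder QEMU block device name
TARGET = "/mnt/backup/vm1-full.raw"    # placeholder backup file


def qmp_command(sock_path, execute, arguments=None):
    """Open a QMP session, negotiate capabilities, run one command."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(sock_path)
    f = sock.makefile("rw")
    json.loads(f.readline())  # greeting banner from QEMU
    for msg in ({"execute": "qmp_capabilities"},
                {"execute": execute, "arguments": arguments or {}}):
        f.write(json.dumps(msg) + "\n")
        f.flush()
        # Skip asynchronous events until the command's reply arrives.
        while True:
            reply = json.loads(f.readline())
            if "return" in reply or "error" in reply:
                break
    sock.close()
    return reply


if __name__ == "__main__":
    # Kick off a full copy of the device to the target file.
    print(qmp_command(QMP_SOCKET, "drive-backup",
                      {"device": DEVICE, "sync": "full",
                       "target": TARGET, "format": "raw"}))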