Re: How to backup hundreds or thousands of TB

A snapshot on the same storage cluster should definitely NOT be treated
as a backup.

A snapshot as a source for backups, however, can be a pretty good
solution in some cases, but not in every case.

For example, if I were using Ceph to serve static web files, I'd rather
have the ability to restore a given file from a given path than a
snapshot of the whole multi-TB cluster.

There are two cases where you need to restore from backup:

* something failed and needs fixing - usually a full restore is needed
* someone accidentally removed a thing, and now they need the thing back

Snapshots fix the first problem, but not the second one: restoring 7 TB
of data to recover a few GB is not reasonable.
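
A minimal sketch of the kind of per-file restore I mean, using the
librados python binding (the pool, object and local paths are made up
for illustration):

  import rados

  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
  cluster.connect()
  try:
      ioctx = cluster.open_ioctx('web-static')  # hypothetical pool
      try:
          # back up one object to a local file...
          data = ioctx.read('site/index.html', length=64 * 1024 * 1024)
          with open('/backup/site_index.html', 'wb') as f:
              f.write(data)
          # ...and later put just that one object back, touching
          # nothing else in the pool
          with open('/backup/site_index.html', 'rb') as f:
              ioctx.write_full('site/index.html', f.read())
      finally:
          ioctx.close()
  finally:
      cluster.shutdown()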

As it is now, we just back up from inside the VMs (file-based backup)
and have Puppet so we can easily recreate a machine's config, but if
(or rather when) we start using the object store, we will back it up
in a way that allows for partial restores; see the sketch below.
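
For the file-based part, even a plain tar archive gives you partial
restore. A minimal sketch, with made-up paths:

  import tarfile

  # back up a directory tree from inside the VM
  with tarfile.open('/backup/www-2015-05-06.tar.gz', 'w:gz') as tar:
      tar.add('/var/www', arcname='www')

  # later: restore one file from one path, not the whole archive
  with tarfile.open('/backup/www-2015-05-06.tar.gz', 'r:gz') as tar:
      tar.extract('www/site/index.html', path='/tmp/restore')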

On Wed, 6 May 2015 10:50:34 +0100, Nick Fisk <nick@xxxxxxxxxx> wrote:
> Personally, I would always feel more comfortable with backups on a completely different storage technology.
> 
> Whilst there are many things you can do with snapshots and replication, there is always a small risk that whatever causes data loss on your primary system may affect or replicate to your second copy.
> 
> I guess it all really depends on what you are trying to protect against, but Tape still looks very appealing if you want to maintain a completely isolated copy of data.
> 
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> > Alexandre DERUMIER
> > Sent: 06 May 2015 10:10
> > To: Götz Reinicke
> > Cc: ceph-users
> > Subject: Re:  How to backup hundreds or thousands of TB
> > 
> > for the moment, you can use snapshots for backups
> > 
> > https://ceph.com/community/blog/tag/backup/
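> >
> > As a rough illustration (not a recipe), snapshot-then-export could
> > look like this with the python rados/rbd bindings - the pool and
> > image names here are made up:
> >
> >   import rados
> >   import rbd
> >
> >   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
> >   cluster.connect()
> >   ioctx = cluster.open_ioctx('rbd')          # hypothetical pool
> >   try:
> >       img = rbd.Image(ioctx, 'vm-disk')      # hypothetical image
> >       try:
> >           img.create_snap('backup-2015-05-06')  # point-in-time snapshot
> >       finally:
> >           img.close()
> >       # re-open read-only at the snapshot and stream it to a file
> >       snap = rbd.Image(ioctx, 'vm-disk', snapshot='backup-2015-05-06')
> >       try:
> >           with open('/backup/vm-disk.img', 'wb') as out:
> >               size = snap.size()
> >               chunk = 4 * 1024 * 1024
> >               for off in range(0, size, chunk):
> >                   out.write(snap.read(off, min(chunk, size - off)))
> >       finally:
> >           snap.close()
> >   finally:
> >       ioctx.close()
> >       cluster.shutdown()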
> > 
> > I think that async mirroring is on the roadmap
> > https://wiki.ceph.com/Planning/Blueprints/Hammer/RBD%3A_Mirroring
> > 
> > 
> > 
> > if you use qemu, you can do a qemu full backup (qemu incremental backup is
> > coming in qemu 2.4).
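> >
> > A rough sketch of kicking off a full backup through QMP with the
> > drive-backup command (the socket path and drive name are made up,
> > and the guest must have been started with a -qmp unix socket):
> >
> >   import json
> >   import socket
> >
> >   sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
> >   sock.connect('/var/run/qemu/vm1.qmp')    # hypothetical socket
> >   f = sock.makefile('rw')
> >
> >   def qmp(cmd, **args):
> >       f.write(json.dumps({'execute': cmd, 'arguments': args}) + '\n')
> >       f.flush()
> >       while True:
> >           resp = json.loads(f.readline())
> >           if 'return' in resp or 'error' in resp:  # skip async events
> >               return resp
> >
> >   f.readline()                # consume the QMP greeting banner
> >   qmp('qmp_capabilities')     # leave capabilities negotiation mode
> >   # full copy of the guest disk to a new image file
> >   qmp('drive-backup', device='drive-virtio-disk0', sync='full',
> >       target='/backup/vm1-full.qcow2', format='qcow2')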
> > 
> > 
> > ----- Original Message -----
> > From: "Götz Reinicke" <goetz.reinicke@xxxxxxxxxxxxxxx>
> > To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> > Sent: Wednesday, 6 May 2015 10:25:01
> > Subject: How to backup hundreds or thousands of TB
> > 
> > Hi folks,
> > 
> > besides hardware, performance and failover design: how do you manage
> > to back up hundreds or thousands of TB :) ?
> > 
> > Any suggestions? Best practice?
> > 
> > A second Ceph cluster at a different location? "Bigger archive" disks in good
> > boxes? Or tape libraries?
> > 
> > What kind of backup software can handle such volumes nicely?
> > 
> > Thanks and regards . Götz
> > --
> > Götz Reinicke
> > IT-Koordinator
> > 
> > Tel. +49 7141 969 82 420
> > E-Mail goetz.reinicke@xxxxxxxxxxxxxxx
> > 
> > Filmakademie Baden-Württemberg GmbH
> > Akademiehof 10
> > 71638 Ludwigsburg
> > www.filmakademie.de
> > 
> > Eintragung Amtsgericht Stuttgart HRB 205016
> > 
> > Vorsitzender des Aufsichtsrats: Jürgen Walter MdL Staatssekretär im
> > Ministerium für Wissenschaft, Forschung und Kunst Baden-Württemberg
> > 
> > Geschäftsführer: Prof. Thomas Schadt
> > 
> > 
> 



-- 
Mariusz Gronczewski, Administrator

Efigence S. A.
ul. Wołoska 9a, 02-583 Warszawa
T: [+48] 22 380 13 13
F: [+48] 22 380 13 14
E: mariusz.gronczewski@xxxxxxxxxxxx


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
