On Mon, 9 May 2016 23:12:46 -0400 Ricky Elrod <codeblock@xxxxxxxx> wrote:

> As some of you know, some of us (mostly smooge and I) have been
> working on statistics gathering for Fedora web infra as of late.
>
> We are working on setting up a piwik instance to be used on our
> websites. However, this instance lives in the cloud and thus can't hit
> the VPN.
>
> A question I ran into and was told to email the list about and/or
> bring up at the meeting, is: what should our backups story look like
> for such cases? Mostly, we have a MySQL database on this node that
> will get really big fairly quickly (right now /var/lib/mysql is about
> 1.1GB and piwik only has one site added, and it's only been there for
> a few weeks).
>
> Are other cloud instances backed up in any way, and if so, how was it
> done? If not, how do we go about coming up with a plan for doing this,
> for this instance and other instances in the future that end up having
> similar requirements?

Yes, we back up several things in the cloud, and it's done the same way
as all our other backups. ;)

Basically, backup01 does a git checkout of the ansible repo, looks at
inventory/backups for the list of backup_clients, then runs rdiff-backup
over /etc, /home, and any additional dirs specified in the host's vars.

For databases, we have a script that does a daily db dump to /backups
and xz-compresses it, and we back that up with rdiff-backup.

How hard is the data to regenerate? That is, if we are keeping the logs
that make the data, could we in theory regen it? Or would that be too
difficult?

kevin
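The per-host "additional dirs" mentioned above live in the host's ansible vars; a minimal sketch of what such an entry could look like follows. The variable name `host_backup_targets` and the host_vars filename are illustrative assumptions, not confirmed contents of the Fedora infrastructure repo:

```yaml
# inventory/host_vars/piwik01.example.org  (illustrative path and hostname)
# Extra directories for backup01's rdiff-backup run, on top of the
# default /etc and /home:
host_backup_targets:
  - /backups
```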
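The daily database-dump step described above can be sketched as a small shell helper. This is only an illustration of the flow (dump, then xz-compress, then let rdiff-backup pick the file up); the function name `dump_and_compress`, the `/backups` path, and the `piwik` database name are assumptions, not the actual Fedora infrastructure script:

```shell
#!/bin/sh
# Hypothetical helper mirroring the described flow: run a dump command,
# write its output to a file, then xz-compress the result.
dump_and_compress() {
    out="$1"; shift
    "$@" > "$out" && xz -f "$out"   # leaves "$out.xz" behind
}

# In production this might be invoked from cron as something like
# (all names illustrative):
#   dump_and_compress "/backups/piwik-$(date +%F).sql" mysqldump piwik
```

rdiff-backup then just backs up `/backups` like any other configured directory.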
_______________________________________________
infrastructure mailing list
infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
http://lists.fedoraproject.org/admin/lists/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx