Re: KVM vs. incremental remote backups




What *I* do for backing up KVM VMs is use LVM volumes, not QCOW2 images.
I take an LVM "snapshot" volume, mount it read-only on the host, and use
tar (via Amanda).  Another option is to install Amanda's client on the VM
itself and have Amanda run tar on the VM -- I use the latter for VMs with
a FS that is not mountable on the host (usually due to ext4 version
issues -- CentOS 6's mount.ext4 did not like Ubuntu 18.04's ext4 fs).  I
have always found using container image files with VMs a bit too opaque.
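A minimal sketch of that snapshot-and-tar cycle (the VG/LV names, sizes
and paths here are hypothetical -- adjust to your setup, and note the
snapshot is only crash-consistent unless you quiesce the guest first,
e.g. with fsfreeze via the qemu guest agent):

```shell
#!/bin/sh
set -e
# Hypothetical names: vg0 is the volume group, mailvm the LV backing
# the guest's filesystem.
VG=vg0
LV=mailvm
SNAP=${LV}-backup
MNT=/mnt/${SNAP}

# 1. Create a small copy-on-write snapshot of the running guest's LV.
lvcreate --snapshot --size 2G --name "$SNAP" "/dev/$VG/$LV"

# 2. Mount it read-only on the host and archive its contents.
mkdir -p "$MNT"
mount -o ro "/dev/$VG/$SNAP" "$MNT"
tar -czf "/backup/${LV}-$(date +%F).tar.gz" -C "$MNT" .

# 3. Tear down: unmount and drop the snapshot.
umount "$MNT"
lvremove -f "/dev/$VG/$SNAP"
```

If the LV holds a whole virtual disk (partition table and all) rather
than a bare filesystem, you would need kpartx or losetup -P to expose
the partitions before mounting.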

Since you are using QCOW2 images, your best option would be to treat the
VMs as if they were just bare metal servers and rsync over the virtual
network (e.g., run on the backup server:
'rsync -a vmhostname:/ /backupdisk/vmhostname_backup/'), and not even try
to back up the QCOW2 image files, except maybe once in a while for
"disaster" recovery purposes (e.g. if you need to recreate the VM from
scratch from a known state).
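Since the original setup used Rsnapshot, the same hard-link trick works
fine against a VM: pass rsync --link-dest pointing at the previous run,
so unchanged files cost neither network transfer nor extra disk.  A
sketch, run on the backup server (hostnames and paths are hypothetical):

```shell
#!/bin/sh
set -e
# Hypothetical names: vmhostname is the guest, /backupdisk the
# backup pool on this server.
VM=vmhostname
DEST=/backupdisk/${VM}_backup
TODAY=$(date +%F)

# Incremental copy: files unchanged since the last run become hard
# links into the "latest" tree (the same technique Rsnapshot uses),
# so only the delta crosses the wire.  Pseudo-filesystems are
# excluded because their contents are not real data.
rsync -a --delete \
      --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
      --exclude='/run/*'  --exclude='/tmp/*' \
      --link-dest="$DEST/latest" \
      "root@$VM:/" "$DEST/$TODAY/"

# Repoint "latest" at the snapshot just taken, for the next run.
ln -sfn "$TODAY" "$DEST/latest"
```

Each dated directory then looks like a full backup but only stores
changed files; deleting an old one never breaks the others.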



At Wed, 31 Mar 2021 14:41:09 +0200 CentOS mailing list <centos@xxxxxxxxxx> wrote:

> 
> Hi,
> 
> Up until recently I've hosted all my stuff (web & mail) on a handful of bare
> metal servers. Web applications (WordPress, OwnCloud, Dolibarr, GEPI,
> Roundcube) as well as mail and a few other things were hosted mostly on one big
> machine.
> 
> Backups for this setup were done using Rsnapshot, a nifty utility that combines
> Rsync over SSH and hard links to make incremental backups.
> 
> This approach has become problematic, for several reasons. First, web
> applications have increasingly specific and sometimes mutually exclusive
> requirements. And second, last month I had a server crash, and even though I
> had backups for everything, this meant quite some offline time.
> 
> So I've opted to go for KVM-based solutions, with everything split up over a
> series of KVM guests. I wrapped my head around KVM, played around with it (a
> lot) and now I'm more or less ready to go.
> 
> One detail is nagging me though: backups.
> 
> Let's say I have one VM that handles only DNS (base installation + BIND) and
> one other VM that handles mail (base installation + Postfix + Dovecot).
> 
> Under the hood that's two QCOW2 images stored in /var/lib/libvirt/images.
> 
> With the old "bare metal" approach I could perform remote backups using Rsync,
> so only the difference between two backups would get transferred over the
> network. Now with KVM images it looks like every day I have to transfer the
> whole image again. As soon as some images have lots of data on them (say, 100
> GB for a small OwnCloud server), this quickly becomes unmanageable.
> 
> I googled around quite some time for "KVM backup best practices" and was a bit
> puzzled to find many folks asking the same question and no real answer, at
> least not without having to jump through burning loops.
> 
> Any suggestions ?
> 
> Niki
> 

-- 
Robert Heller             -- Cell: 413-658-7953 GV: 978-633-5364
Deepwoods Software        -- Custom Software Services
http://www.deepsoft.com/  -- Linux Administration Services
heller@xxxxxxxxxxxx       -- Webhosting Services
                                       
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos


