Re: Cache modes libvirt


 



I use the same technique for my normal snapshot backups, but this concerns an Autodesk database. In order to have full Autodesk support in case things go wrong, I need to follow Autodesk's recommendations: do a data backup (db dump + file store copy) with their tool, the ADMS Console (Autodesk Data Management Console).

Instead of writing this 'ADMS Console backup' directly to a network backup location, I dump the data to a local disk (a Ceph RBD) and afterwards do a simple rsync to the network backup location.

This dump used to take between 1 and 2 hours. Now that the storage of the VM (libvirt + qemu) has been migrated to a Ceph RBD, it takes more than 6 hours. (Changing the cache mode of the dump disk didn't improve it by a single minute.)
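For reference, the cache mode I'm changing is the `cache` attribute on the disk's driver element in the libvirt domain XML; a minimal sketch (the pool/image name, monitor host, and device name here are illustrative placeholders, not my actual configuration):

```xml
<disk type='network' device='disk'>
  <!-- cache mode is set here; this disk uses 'writeback' -->
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='libvirt-pool/dump-disk'>
    <host name='ceph-mon1' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
```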

Worth mentioning: the file store consists of a lot of small files. The dump is not a simple SQL dump + copying files. ADMS doesn't use any snapshot method, yet the database stays accessible. As far as I can follow the dump process, it creates a list of files + data to be copied, then copies file after file, immediately registering each file in the dump database.

My Ceph system is a simple 3-node cluster with 5 OSDs per node on a 10Gb network.


From: E Taka <0etaka0@xxxxxxxxx>
Sent: Wednesday, 30 November 2022 16:50
To: Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>
CC: ceph-users@xxxxxxx
Subject: Re: Cache modes libvirt

Some information is missing to give a helpful answer.

How do you back up? (Files? RBD via Ceph? Block device with qemu-img?) Which device driver do you use (virtio? SATA?)

In our production we use virtio RBD and the hypervisor's standard cache mode. The disks are snapshotted before the backup and exported with 'qemu-img', e.g.:
virsh snapshot-create-as VM backup-VM --diskspec vda,file=/snapshots/backup-snapshot-VM-vda.raw --disk-only --atomic --quiesce --no-metadata
qemu-img convert -O raw rbd:libvirt-pool/VM.raw /backup/VM.raw -p
virsh blockcommit VM vda --wait --active --pivot

The backup process is as fast as expected.

Am Mi., 30. Nov. 2022 um 16:09 Uhr schrieb Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx<mailto:dominique.ramaekers@xxxxxxxxxx>>:
>
> Hi,
>
> I was wondering...
>
> In the Ceph/libvirt docs, only the cache modes writethrough and writeback are discussed. My clients' disks are all set to writeback in the libvirt client XML definition.
>
> For a backup operation, I notice a severe lag on one of my VMs. A backup operation that takes 1 to 2 hours on local LVM storage (on a comparable machine) takes 6 hours on Ceph storage. Though I have to say that the cache mode on the local LVM storage is set to directsync.
>
> => So what if I played around with other cache modes? Would it make a difference?
>
> I'm trying cache mode 'unsafe' now, so tonight I should be able to measure the difference, if there is any.
>
> Input will be greatly appreciated.
>
> Greetings,
>
> Dominique.
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx<mailto:ceph-users@xxxxxxx>
> To unsubscribe send an email to ceph-users-leave@xxxxxxx<mailto:ceph-users-leave@xxxxxxx>



