Re: KVM+Ceph: Live migration of I/O-heavy VM

On 11.12.2018 17:39, Lionel Bouton wrote:
On 11/12/2018 at 15:51, Konstantin Shalygin wrote:

Currently I am planning a migration of a large VM (MS Exchange, 300 mailboxes
and a 900GB DB) from qcow2 on ext4 (RAID1) to an all-flash Ceph Luminous
cluster (which already holds lots of images).
The server has access to both local and cluster storage; I only need
to live-migrate the storage, not the machine.

I have never used live migration, as it can cause more issues, and the
VMs that have already been migrated had planned downtime.
Taking the VM offline and converting/importing it with qemu-img would take
some hours, but I would like to keep serving clients in the meantime, even
if things are slower.
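
For reference, the offline route would be a single qemu-img convert
straight into RBD. A rough sketch (pool and image names are placeholders,
and qemu-img must be built with rbd support):

  # convert the qcow2 image to a raw RBD image, with progress output
  qemu-img convert -p -f qcow2 -O raw \
      /var/lib/libvirt/images/exchange.qcow2 \
      rbd:rbd-pool/exchange-vm

At roughly this image size, that is the step I expect to take hours,
during which the VM would be down.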

The VM is I/O-heavy relative to the old storage (LSI/Adaptec with
BBU). There are two HDDs in RAID1 which are constantly at 30-60%
load (this goes up to 100% during reboots, updates, or login
prime time).

What happens when either the local compute node or the Ceph cluster
fails (degraded)? Or when the network is unavailable?
Are all writes performed to both locations? Is this fail-safe? Or does
the VM crash in the worst case, which could lead to a dirty shutdown of
the MS-EX DBs?

The node currently has 4GB of free RAM and 29GB listed as cache /
available. These numbers need caution, because we have "tuned" enabled,
which deduplicates RAM, and this host runs about 10 Windows VMs.
During reboots or updates, RAM can fill up again.

Maybe I am too cautious about live storage migration, maybe I am not.

What are your experiences or advice?

Thank you very much!

I have read your message twice and still can't figure out what your question is.

Do you need to move your block image from some storage to Ceph? No, you can't do that without downtime, because of filesystem consistency.

You can easily migrate your filesystem via rsync, for example, with a small downtime to reboot the VM.
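
A minimal sketch, assuming the data lives on a plain filesystem and the
source and destination paths below are placeholders:

  # first pass while the VM is still running
  rsync -aHAX --numeric-ids /srv/vm-data/ newhost:/srv/vm-data/
  # shut the VM down, then a short final pass to catch remaining changes
  rsync -aHAX --numeric-ids --delete /srv/vm-data/ newhost:/srv/vm-data/

The first pass does the bulk of the copy online; the final pass only
transfers what changed, so the downtime is roughly the time of that delta.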


I believe the OP is trying to use the storage migration feature of QEMU. I've never tried it and I wouldn't recommend it (it is probably not very well tested, and there is a large window for failure).


I use the QEMU storage migration feature via the Proxmox web UI several times a day. Never any issues.

I regularly migrate between Ceph RBD, local directories, shared LVM over Fibre Channel, and NFS servers. Super easy and convenient.
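
Outside Proxmox, the same mechanism is reachable through libvirt's
blockcopy. A rough sketch, where the domain name, disk target, and
destination path are all placeholders:

  # mirror the running VM's disk onto the new storage, then switch to it
  virsh blockcopy exchange-vm vda --dest /mnt/newstore/exchange-vm.raw \
      --format raw --wait --verbose --pivot
  # for an RBD destination, pass a full <disk> definition via --xml
  # instead of --dest

Once the mirror reaches sync, --pivot repoints the guest at the copy;
until then writes go to both sides, so the old image stays consistent
if you abort the job.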


Ronny Aasen


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
