Re: Multi-threaded virsh migrate

Originally this was being performed and tested without compression; both live and cold migrations run at about the same speed, around ~1.2 Gbps.

There are no explicit restrictions on the core switch: same data center, same switch.

Running twin 10Gb Intel X710s on Dell R630 equipment with SSDs; pretty nice rigs.

Whether using scp (multiple sessions) or iperf, I can reach ~9.6 Gbps.
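
For reference, the raw-link check I mean is just stock iperf3, with the server on the destination and a few parallel streams from the source (host name as in our migrate command quoted below):

    # iperf3 -s                                  (on the destination)
    # iperf3 -c myhost-10g.mydomain.net -P 4     (on the source)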

I mean, I can work around it offline with manual chunks; it would just be really cool to do live migration at those speeds for these mammoth guest volumes.
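
For the curious, the offline chunking is nothing clever; the paths and chunk size below are made up for illustration, and the guest is shut down first. Split on the source, push the chunks over parallel scp sessions, then reassemble on the destination:

    # split -b 2G /var/lib/libvirt/images/bigguest.qcow2 /tmp/chunks/bigguest.
    # ls /tmp/chunks/bigguest.* | xargs -P 8 -I{} scp {} myhost-10g.mydomain.net:/tmp/chunks/
    # cat /tmp/chunks/bigguest.* > /var/lib/libvirt/images/bigguest.qcow2

It's the parallel scp sessions that get us near the 9.6 Gbps mark.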

As you said, there's something in the mix with QEMU. I guess we'll wait and see; I'm glad someone's working on it.

On Wed, May 9, 2018 at 4:49 AM, Daniel P. Berrangé <berrange@xxxxxxxxxx> wrote:
On Mon, May 07, 2018 at 11:55:14AM -0400, Shawn Q wrote:
> Hi folks, we are using 10Gb NICs with multithreaded compression.
>
> We're finding that the standard `virsh migrate` gets at most ~1.2 Gbps,
> similar to a single scp session.

Hmm, I didn't actively measure the throughput when I tested, but looking
at the results and inferring bandwidth, I'm fairly sure I saw in
excess of 1.2 Gbps. With scp I would expect the transfer rate to be limited
by the speed at which your CPUs can perform AES encryption.
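
If you want to test that theory, openssl's built-in benchmark gives a rough
per-core ceiling for a given cipher (the cipher name here is just an example):

    $ openssl speed -evp aes-128-gcm

You can also try pointing scp at a different cipher with its -c option to
see whether the transfer rate moves.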

Your migration command is transferring plaintext, so it should not be
limited in that way and should thus get better results. I wonder if
the compression code is hurting you, because it burns massive amounts
of CPU time, often for minimal compression benefit.

> When we do a multipart upload with multiple scp connections, we can squeeze
> out as high as 9.6 Gbps.
>
> Is there a way to get `virsh migrate` to perform multiple connections
> as well when transferring?

There is work underway in QEMU to add this feature, but it will be a
while before it is ready.
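
For reference, the QEMU side of this is the multifd migration work. Once it
lands and libvirt exposes it, the expectation is an invocation along these
lines (flag names speculative until it actually ships):

    # virsh migrate --live --parallel --parallel-connections 8 \
          dev-testvm790.mydomain.net qemu+tcp://myhost-10g.mydomain.net/system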

> Would be useful to be able to migrate big guests using the full capacity of
> the 10Gb NICs.
>
> Our example command to migrate:
>
> # virsh migrate --compressed --comp-methods mt --comp-mt-level 9 \
>       --comp-mt-threads 16 --comp-mt-dthreads 16 --verbose --live \
>       --copy-storage-all --undefinesource --unsafe --persistent \
>       dev-testvm790.mydomain.net qemu+tcp://myhost-10g.mydomain.net/system

I would caution that the compression code should not be assumed to be
beneficial. When I tested compression against extreme guest workloads,
it was actually harmful[1]: it burns a lot of CPU time, which takes
significant time away from the guest OS vCPUs, while not achieving very
good compression.
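
As a first experiment, it is worth timing the same migration with the
compression flags simply dropped and everything else unchanged:

    # virsh migrate --verbose --live --copy-storage-all \
          --undefinesource --unsafe --persistent \
          dev-testvm790.mydomain.net qemu+tcp://myhost-10g.mydomain.net/system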

If you have trouble with migration not completing, the best advice
at this time is to use the post-copy migration method. This
guarantees that migration completes in finite time, with the
lowest impact on guest vCPUs, assuming you let it run pre-copy
long enough to copy the bulk of RAM before flipping to post-copy.
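
Concretely, something like this (a sketch, without your storage-copy flags,
to show just the post-copy mechanics):

    # virsh migrate --live --postcopy \
          dev-testvm790.mydomain.net qemu+tcp://myhost-10g.mydomain.net/system

then, once the bulk of RAM has been transferred, flip the switch from
another shell:

    # virsh migrate-postcopy dev-testvm790.mydomain.net

or pass --postcopy-after-precopy to have virsh do the switch-over
automatically after one pass of pre-copy.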

Regards,
Daniel

[1] https://www.berrange.com/posts/2016/05/12/analysis-of-techniques-for-ensuring-migration-completion-with-kvm/
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

