Re: Question about KVM IO performance with FreeBSD as a guest OS

On 2019-06-28 11:53, Stefan Hajnoczi wrote:
On Sun, Jun 23, 2019 at 03:46:29PM +0200, Rainer Duffner wrote:
I have huge problems running FreeBSD 12 (amd64) as a KVM guest.

KVM is running on Ubuntu 18 LTS, in an OpenStack setup with dedicated Ceph-Storage (NVMe SSDs).

The VM "flavor" as such caps IOPS at 2000/s - and I do get those 2k IOPS when I run e.g. CentOS 7.

But on FreeBSD, I get way less.

E.g. running dc3dd to write zeros to a disk, I get 120 MB/s on CentOS 7.
With FreeBSD, I get 9 MB/s.


The VMs were created on an OpenSuSE 42.3 host with the commands described here:

https://docs.openstack.org/image-guide/freebsd-image.html


This matches the results we got on XenServer, where some people reported the same problems while others had no problems at all.

Feedback from the FreeBSD community suggests that the problem is not unheard of, but also not universally reproducible.
So, I assume it must be some hypervisor misconfiguration?

I’m NOT the administrator of the KVM hosts. I can ask them tomorrow, though.

I’d like to get some ideas on what to look for on the hosts directly, if that makes sense.

Hi Rainer,
Maybe it's the benchmark.  Can you share the exact command-line you are
running on CentOS 7 and FreeBSD?

The blocksize and amount of parallelism (queue depth or number of
processes/threads) should be identical on CentOS and FreeBSD.  The
benchmark should open the file with O_DIRECT.  It should not fsync()
(flush) after every write request.
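One way to pin all of those knobs identically on both guests is a fio job file instead of a long command line. This is only a sketch (the filename and size are assumptions to adjust); psync with direct=1 uses O_DIRECT and fio does not fsync() after each write unless told to:

```ini
; sketch: identical job file for the CentOS and FreeBSD guests
[seqwrite-test]
; assumption: adjust path to your test mount
filename=/mnt/test.fio_test_file
ioengine=psync
; O_DIRECT: bypass the guest page cache
direct=1
rw=write
bs=4k
numjobs=4
iodepth=4
size=8G
runtime=60
group_reporting
```

Running `fio seqwrite-test.fio` on both systems then guarantees the block size, parallelism, and caching behavior match exactly.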

If you are using large blocksizes (>256 KB) then perhaps the guest I/O
stack is splitting them up differently on FreeBSD and Linux.
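On the Linux side you can read the block layer's per-device request-size cap straight out of sysfs to see where splitting would kick in (a sketch; device names vary, and FreeBSD has no /sys, so this only covers the Linux guest):

```shell
# Sketch: print each block device's maximum per-request size in KB.
# Requests larger than max_sectors_kb are split by the guest block
# layer before they reach the virtio-blk driver.
for q in /sys/block/*/queue; do
    dev=$(basename "$(dirname "$q")")
    printf '%s: %s KB\n' "$dev" "$(cat "$q/max_sectors_kb")"
done
```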

Here is a sequential write benchmark using dd:

  dd if=/dev/zero of=/dev/vdX oflag=direct bs=4k count=1048576


Hi,

you can read more about it here:

https://forums.freebsd.org/threads/is-kvm-virtio-really-that-slow-on-freebsd.71186/


I used

[root@centos ~]# fio -filename=/mnt/test.fio_test_file -direct=1 -iodepth 4 \
    -thread -rw=randrw -ioengine=psync -bs=4k -size 8G -numjobs=4 \
    -runtime=60 -group_reporting -name=pleasehelpme


Also on FreeBSD.


FreeBSD's dd doesn't have the oflag=direct option.
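That gap matters less than it looks: FreeBSD's disk nodes are character (raw) devices, so writing to the device directly already bypasses the buffer cache without any oflag. A small sketch that just prints the roughly equivalent command per guest OS (the device names /dev/vdb and /dev/vtbd1 are assumptions, substitute your test disk):

```shell
# Sketch: suggest a comparable sequential-write command per guest OS.
# GNU dd needs oflag=direct; FreeBSD's raw disk devices are unbuffered,
# so plain dd to the device node is already cache-free.
case "$(uname -s)" in
  Linux)   echo "dd if=/dev/zero of=/dev/vdb oflag=direct bs=4k count=1048576" ;;
  FreeBSD) echo "dd if=/dev/zero of=/dev/vtbd1 bs=4k count=1048576" ;;
  *)       echo "unsupported OS" >&2 ;;
esac
```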


We tried the work-around described here:
https://www.cyberciti.biz/faq/slow-performance-issues-of-openbsd-or-freebsd-kvm-guest-on-linux/

But it doesn't really change anything. Most likely, the fixes hinted at in that mailing list thread have already gone into the kernel by now.

The compute-nodes are running
4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux


We saw similar behavior on XenServer 6.5 and real-world performance matched the (abysmal) benchmark-results we got.

The "normal" FreeBSD dd does about 1.1MB/s right now.

On my desktop PC (OpenSuSE Leap 42.3, an i5-6500) with a cheap 256 GB Samsung OEM desktop SSD, I get 55 MB/s using the above dd command.

The servers we use are pretty high-end SuperMicro machines, with Mellanox 40 GBit cards etc.




Best Regards
Rainer


