Re: KVM pci-assign - iommu width is not sufficient for mapped address

Hi Alex,

It will be hard to reproduce this on Fedora/RHEL. Our host and VM are
Ubuntu-based, but I can move to any kernel/QEMU/VFIO versions that you
recommend.

Both our host and guest run Ubuntu Trusty (Ubuntu 14.04.3 LTS) with
Linux kernel 3.18.19 (from
http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.18.19-vivid/).

The QEMU version on the host is:
QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.21), Copyright
(c) 2003-2008 Fabrice Bellard

We are using 8 x Intel RMS3CC080 SSDs for this test. We expose these
SSDs to the VM (through iSER) and then set up dm-stripe over them
within the VM. On top of that stripe we create two 100GB dm-linear
devices and export them through SCST to an external server. The
external server connects to these devices over iSER and has multipath
with 4 paths per device (policy: queue-length 0). From the external
server we run fio with 4 threads, each with 64 outstanding IOs of 100%
4K random reads.
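
For completeness, the fio invocation on the external server is along
these lines (the exact job file isn't attached, so the multipath
device name below is a placeholder):

$ fio --name=rand4k --filename=/dev/mapper/mpatha \
      --rw=randread --bs=4k --direct=1 --ioengine=libaio \
      --iodepth=64 --numjobs=4 --group_reporting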

This is the performance difference we see:

With pci-assign to the VM:
randrw 100:0 64iodepth 4thr 4kb - R: 550,224K wait_us:2,245 cpu
tot:85.57 usr:3.96 sys:31.55 iow:50.06

i.e. we get 137-140K IOPS, or 550MB/s

With VFIO to the VM:
randrw 100:0 64iodepth 4thr 4kb - R: 309,432K wait_us:3,964 cpu
tot:78.58 usr:2.28 sys:18.00 iow:58.30

i.e. we get 77-80K IOPS, or 310MB/s
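
(For reference, the IOPS figures follow from the reported bandwidth:
550,224 KB/s / 4 KB per IO ~= 137K IOPS, and 309,432 KB/s / 4 KB
~= 77K IOPS, so the throughput and IOPS numbers are consistent.)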

The only change between the two runs is that the VM is spawned with
VFIO instead of pci-assign. There is no other difference in software
versions or settings; the relevant part of the QEMU command line is
shown below.
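
The device-assignment portion of the two command lines differs only in
the device type used for the VF (the PCI address below is a
placeholder; the real one is in the uploaded file):

  -device pci-assign,host=0000:08:00.1    (pci-assign run)
  -device vfio-pci,host=0000:08:00.1      (VFIO run)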

$ grep VFIO /boot/config-`uname -r`
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO=m
CONFIG_VFIO_PCI=m
CONFIG_VFIO_PCI_VGA=y
CONFIG_KVM_VFIO=y

I uploaded the QEMU command lines and lspci outputs at
https://www.dropbox.com/s/imbqn0274i6hhnz/vfio_issue.tgz?dl=0

Please let me know if you have any issues downloading it.

Please let us know if you can see whether any KVM acceleration is
disabled, and suggest next steps for debugging with VFIO tracing.
Thanks for your help!
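
If it helps, I can also capture KVM exit activity on the host during a
run with something like the following (just a sketch; happy to run
whatever tracing you prefer instead):

$ sudo perf kvm stat live
$ sudo trace-cmd record -e kvm -o kvm-trace.dat sleep 30
$ trace-cmd report -i kvm-trace.dat | head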

--Shyam

On Fri, Jan 8, 2016 at 10:23 AM, Alex Williamson
<alex.williamson@xxxxxxxxxx> wrote:
> On Fri, 2016-01-08 at 09:47 +0530, Shyam wrote:
>> Hi Alex,
>>
>> Thanks for your inputs.
>>
>> We are using Mellanox ConnectX-3 iSER SRIOV capable NICs. We
>> provision
>> these VF's into the VM. The VM connects to few SSD drives through
>> iSER. For this performance test, if we expose the same SSDs through
>> iSER out of VM to servers & run vdbench 4K read/write workloads we
>> see
>> this significant performance drop when using vfio. These VM's have 8
>> hyper-threads from Intel E5-2680 v3 server & 32GB RAM. The key
>> observation is with vfio the cpu saturates much earlier & hence
>> cannot
>> allow us to scale IOPs.
>>
>> I will open a separate mail thread about this performance degradation
>> using vfio with numbers. In the meantime if you can suggest how to
>> look for performance issue or what logs you would prefer for VFIO
>> debugging it will help in getting the needed info for you.
>
> Hi Shyam,
>
> For the degree of performance loss you're experiencing, I'd suspect
> some sort of KVM acceleration is disabled.  Would it be possible to
> reproduce your testing on a host running something like Fedora 23 or
> RHEL7/Centos7 where we know that the kernel and QEMU are fully enabled
> for vfio?
>
> Other useful information:
>
>  * QEMU command line or libvirt logs for VM in each configuration
>  * lspci -vvv of VF from host while in operation in each config
>  * QEMU version
>  * grep VFIO /boot/config-`uname -r` (or wherever the running kernel
>    config is on your system)
> For a well behaved VF, device assignment should mostly setup VM access
> and get out of the way, there should be little opportunity to inflict
> such a high performance difference.  If we can't spot anything obvious
> and it's reproducible on a known kernel and QEMU, we can look into
> tracing to see what might be happening.  Thanks,
>
> Alex