On Fri, 2016-01-08 at 09:47 +0530, Shyam wrote:
> Hi Alex,
>
> Thanks for your inputs.
>
> We are using Mellanox ConnectX-3 iSER SRIOV-capable NICs. We provision
> these VFs into the VM. The VM connects to a few SSD drives through
> iSER. For this performance test, if we expose the same SSDs through
> iSER out of the VM to servers and run vdbench 4K read/write workloads,
> we see this significant performance drop when using vfio. These VMs
> have 8 hyper-threads from an Intel E5-2680 v3 server and 32GB RAM. The
> key observation is that with vfio the CPU saturates much earlier and
> hence does not allow us to scale IOPS.
>
> I will open a separate mail thread about this performance degradation
> using vfio, with numbers. In the meantime, if you can suggest how to
> look for the performance issue, or which logs you would prefer for
> VFIO debugging, it will help in getting the needed info to you.

Hi Shyam,

For the degree of performance loss you're experiencing, I'd suspect some
sort of KVM acceleration is disabled. Would it be possible to reproduce
your testing on a host running something like Fedora 23 or
RHEL7/CentOS7, where we know that the kernel and QEMU are fully enabled
for vfio?

Other useful information (see the collection sketch below):

 * QEMU command line or libvirt logs for the VM in each configuration
 * lspci -vvv of the VF from the host while in operation in each config
 * QEMU version
 * grep VFIO /boot/config-`uname -r` (or wherever the running kernel
   config is on your system)

For a well-behaved VF, device assignment should mostly set up VM access
and get out of the way; there should be little opportunity to inflict
such a large performance difference. If we can't spot anything obvious
and it's reproducible on a known kernel and QEMU, we can look into
tracing to see what might be happening.

Thanks,
Alex
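
A minimal sketch of how the information requested above could be
gathered in one pass, assuming a libvirt-managed QEMU guest on the host.
VF_BDF and VM_NAME are placeholders, and the libvirt log path may differ
on your distribution:

  #!/bin/bash
  # Sketch: collect the diagnostics requested above; run once per
  # configuration (with and without vfio) while the workload is active.

  VF_BDF="0000:03:00.1"     # placeholder: PCI address of the assigned VF
  VM_NAME="iser-test-vm"    # placeholder: libvirt domain name, if used

  echo "== QEMU version =="
  qemu-system-x86_64 --version

  echo "== QEMU command line of the running VM =="
  ps -o args= -C qemu-system-x86_64

  echo "== libvirt log for the VM (default path; adjust if needed) =="
  tail -n 100 "/var/log/libvirt/qemu/${VM_NAME}.log"

  echo "== lspci -vvv of the VF while in operation =="
  lspci -vvv -s "$VF_BDF"

  echo "== VFIO options in the running kernel config =="
  grep VFIO "/boot/config-$(uname -r)"

Capturing the output of this from both the fast and slow configurations
makes it easy to diff the two setups side by side.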