Hi Dave,

My comments are in-line.

Dr. David Alan Gilbert <dgilbert@xxxxxxxxxx> wrote on Mon, Oct 1, 2018 at 7:41 PM:
>
> * Feng Li (lifeng1519@xxxxxxxxx) wrote:
> > Hi,
> > I found an obvious performance degradation when virtio-console is
> > combined with virtio-blk-pci.
> >
> > This phenomenon exists in nearly all QEMU versions and all Linux
> > distros (CentOS 7, Fedora 28, Ubuntu 18.04).
> >
> > This is the disk cmd:
> > -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> >
> > If I add "-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5",
> > the virtio disk 4k IOPS (randread/randwrite) drops from 60k to 40k.
> >
> > In the VM, if I rmmod virtio-console, the performance goes back to normal.
> >
> > Any idea about this issue?
> >
> > I don't know whether this is a QEMU issue or a kernel issue.
>
> It sounds odd; can you provide more details on:
>   a) The benchmark you're using.

I'm using fio; the config is:

[global]
ioengine=libaio
iodepth=128
runtime=120
time_based
direct=1

[randread]
stonewall
bs=4k
filename=/dev/vdb
rw=randread

>   b) the host and the guest config (number of cpus etc)

The qemu cmd is:

/usr/libexec/qemu-kvm --device virtio-balloon -m 16G --enable-kvm -cpu host -smp 8

or

qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu host -smp 8

The result is the same either way.

>   c) Why are you running it with iscsi back to the same host - why not
>      just simplify the test back to a simple file?

Because my iSCSI target can deliver high IOPS. With a slow disk, the
degradation would be much less obvious. It's easy to see; you could try
it yourself.

> Dave
>
> > Thanks in advance.
> >
> > --
> > Thanks and Best Regards,
> > Alex
>
> --
> Dr. David Alan Gilbert / dgilbert@xxxxxxxxxx / Manchester, UK

--
Thanks and Best Regards,
Feng Li (Alex)
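P.S. For anyone who wants to reproduce this, below is a sketch of the
two boots I compare, assembled from the flags quoted above. The iSCSI
URL, PCI addresses, and device IDs are the ones from my original
report; adjust them for your own setup.

# baseline: virtio-blk-pci only
qemu-system-x86_64 --enable-kvm -cpu host -smp 8 -m 16G \
  --device virtio-balloon \
  -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on

# affected: the same command plus the virtio-serial controller
qemu-system-x86_64 --enable-kvm -cpu host -smp 8 -m 16G \
  --device virtio-balloon \
  -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on \
  -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5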
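Inside the guest, the A/B check is then just the following (note the
module is actually named virtio_console, with an underscore; save the
fio job above as randread.fio first):

fio randread.fio      # with virtio-serial-pci present: ~40k IOPS here
rmmod virtio_console  # unload the guest console driver
fio randread.fio      # performance is back to ~60k IOPS here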