Re: [Qemu-devel] virtio-console downgrade the virtio-pci-blk performance

On (Thu) 11 Oct 2018 [18:15:41], Feng Li wrote:
> Adding Amit Shah.
> 
> After some tests, we found:
> - the number of virtio-serial ports is inversely proportional to the
> iSCSI virtio-blk-pci performance.
> If we set the virtio-serial ports to 2 ("<controller
> type='virtio-serial' index='0' ports='2'/>"), the performance downgrade
> is minimal.

If you use multiple virtio-net (or blk) devices -- just register, not
necessarily use -- does that also bring the performance down?  I
suspect it's the number of interrupts that get allocated for the
ports.  Also, could you check if MSI is enabled?  Can you try with and
without?  Can you also reproduce it if you have multiple virtio-serial
controllers with 2 ports each (totalling up to whatever number
reproduces the issue)?
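
For instance, something along these lines (the ids/addresses below are
only placeholders to adjust to your setup, and max_ports should be what
libvirt's ports= attribute maps to):

  # In the guest: is MSI/MSI-X enabled for the virtio devices?
  lspci -vv | grep MSI
  cat /proc/interrupts | grep virtio

  # To compare without MSI, boot the guest with the kernel parameter
  # pci=nomsi and rerun the same fio job.

  # Two virtio-serial controllers with 2 ports each, on the QEMU
  # command line:
  -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5,max_ports=2
  -device virtio-serial-pci,id=virtio-serial1,bus=pci.0,addr=0x7,max_ports=2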

		Amit

> 
> - with a local disk or a RAM disk as the virtio-blk-pci backend, the
> performance downgrade is still obvious.
> 
> 
> Could anyone help with this issue?
> 
> Feng Li <lifeng1519@xxxxxxxxx> wrote on Mon, Oct 1, 2018 at 10:58 PM:
> >
> > Hi Dave,
> > My comments are in-line.
> >
> > Dr. David Alan Gilbert <dgilbert@xxxxxxxxxx> wrote on Mon, Oct 1, 2018 at 7:41 PM:
> > >
> > > * Feng Li (lifeng1519@xxxxxxxxx) wrote:
> > > > Hi,
> > > > I found an obvious performance downgrade when virtio-console is combined
> > > > with virtio-blk-pci.
> > > >
> > > > This phenomenon exists in nearly all QEMU versions and all Linux
> > > > distros (CentOS 7, Fedora 28, Ubuntu 18.04).
> > > >
> > > > This is the disk command line:
> > > > -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
> > > > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
> > > >
> > > > If I add "-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5",
> > > > the virtio disk 4k IOPS (randread/randwrite) drops from 60k to 40k.
> > > >
> > > > In the VM, if I rmmod virtio-console, the performance goes back to normal.
> > > >
> > > > Any idea about this issue?
> > > >
> > > > I don't know whether this is a QEMU issue or a kernel issue.
> > >
> > > It sounds odd;  can you provide more details on:
> > >   a) The benchmark you're using.
> > I'm using fio; the config is:
> > [global]
> > ioengine=libaio
> > iodepth=128
> > runtime=120
> > time_based
> > direct=1
> >
> > [randread]
> > stonewall
> > bs=4k
> > filename=/dev/vdb
> > rw=randread
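> >
> > (For reference: a job file like this is run with something like
> > "fio randread.fio" inside the guest; /dev/vdb is the virtio-blk disk
> > under test.)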
> >
> > >   b) the host and the guest config (number of cpus etc)
> > The QEMU command is: /usr/libexec/qemu-kvm --device virtio-balloon -m 16G
> > --enable-kvm -cpu host -smp 8
> > or qemu-system-x86_64 --device virtio-balloon -m 16G --enable-kvm -cpu
> > host -smp 8
> >
> > The result is the same.
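> >
> > Putting the pieces together, the full command to reproduce should be
> > roughly (other devices omitted):
> >
> > qemu-system-x86_64 --enable-kvm -cpu host -smp 8 -m 16G \
> >   -device virtio-balloon \
> >   -drive file=iscsi://127.0.0.1:3260/iqn.2016-02.com.test:system:fl-iscsi/1,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native \
> >   -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on \
> >   -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5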
> >
> > >   c) Why are you running it with iscsi back to the same host - why not
> > >      just simplify the test back to a simple file?
> > >
> >
> > Because my iSCSI target can supply high IOPS.
> > With a slow disk, the performance downgrade would not be so obvious.
> > It's easy to see; you could try it yourself.
> >
> >
> > > Dave
> > >
> > > >
> > > > Thanks in advance.
> > > > --
> > > > Thanks and Best Regards,
> > > > Alex
> > > >
> > > --
> > > Dr. David Alan Gilbert / dgilbert@xxxxxxxxxx / Manchester, UK
> >
> >
> >
> > --
> > Thanks and Best Regards,
> > Feng Li(Alex)
> 
> 
> 
> --
> Thanks and Best Regards,
> Feng Li(Alex)

		Amit
-- 
http://amitshah.net/