https://bugzilla.kernel.org/show_bug.cgi?id=199727

--- Comment #9 from Roland Kletzing (devzero@xxxxxx) ---

https://qemu-devel.nongnu.narkive.com/I59Sm5TH/lock-contention-in-qemu

<snip>
I find that the timeslice of a vCPU thread in QEMU/KVM is unstable when
there are lots of read requests (for example, reading 4KB at a time, 8GB
in total, from one file) from the guest OS. I also find that this
phenomenon may be caused by lock contention in the QEMU layer. I ran into
this problem under the following workload.
<snip>

Yes, there is a way to reduce jitter caused by the QEMU global mutex:

  qemu -object iothread,id=iothread0 \
       -drive if=none,id=drive0,file=test.img,format=raw,cache=none \
       -device virtio-blk-pci,iothread=iothread0,drive=drive0

Now the ioeventfd and thread pool completions will be processed in
iothread0 instead of the QEMU main loop thread. This thread does not take
the QEMU global mutex, so vCPU execution is not hindered. This feature is
called virtio-blk dataplane.
<snip>

I tried "VirtIO SCSI single" with "aio=threads" and "iothread=1" in
Proxmox, and after that I get absolutely NO jitter in ping anymore, even
with very heavy read/write I/O inside two VMs (located on the same
spinning HDD, on top of ZFS lz4 and zstd datasets with qcow2 images),
with severe write starvation (some ioping results well over 30s), and
even while live migrating both VM disks in parallel to another ZFS
dataset on the same HDD. Ping to both VMs stays constantly below 0.2ms.

From the kvm process's command line:

  -object iothread,id=iothread-virtioscsi0
  -device virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0
  -drive file=/hddpool/vms-files-lz4/images/116/vm-116-disk-3.qcow2,if=none,id=drive-scsi0,cache=writeback,aio=threads,format=qcow2,detect-zeroes=on
  -device scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100
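
A quick way to confirm that the iothread object really exists at runtime
is to ask QMP for it. This is a minimal sketch; it assumes the VM was
started with a QMP socket such as "-qmp unix:/run/qmp-116.sock,server,nowait"
(that socket path is my own example, not something Proxmox creates under
that name):

  # send the QMP handshake, then query the iothreads
  printf '%s\n' \
      '{"execute":"qmp_capabilities"}' \
      '{"execute":"query-iothreads"}' \
      | socat - UNIX-CONNECT:/run/qmp-116.sock

The reply should list iothread-virtioscsi0 together with its host
thread-id, which you can then watch in top or pidstat to verify that the
I/O completions really run outside the main loop thread.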
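
For anyone who wants to reproduce this in Proxmox: the command line above
is what Proxmox generates from settings roughly like the following in
/etc/pve/qemu-server/<vmid>.conf. This is a sketch from my setup; the
storage name (vms-files-lz4) and disk volume are mine and will differ on
your system:

  scsihw: virtio-scsi-single
  scsi0: vms-files-lz4:116/vm-116-disk-3.qcow2,aio=threads,cache=writeback,iothread=1

The "single" variant of the controller matters here: as far as I
understand, it gives each disk its own virtio-scsi controller, so each
disk can be served by its own iothread instead of all disks sharing one
controller.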