https://bugzilla.kernel.org/show_bug.cgi?id=199727

--- Comment #18 from Gergely Kovacs (gkovacs@xxxxxxxxx) ---

Thank you Roland Kletzing for your exhaustive investigation and Stefan Hajnoczi for your insightful comments.

This problem has been affecting us (and many users of Proxmox, and likely of vanilla KVM as well) for more than a decade, yet the Proxmox developers were unable to solve it or even reproduce it (despite the large number of forum threads and bugs filed), which is why I created this bug report four years ago.

It looks like we are closing in: the KVM global mutex could be the real culprit. In our case the problems were only mostly gone after moving all our VM storage to NVMe (which increased IO bandwidth enormously), but fully gone after setting VirtIO SCSI Single / iothread=1 / aio=threads on all our KVM guests. For many years a VM migration or restore could render other VMs on the same host practically unusable for the duration of the heavy IO; now these operations can be done safely.

I will experiment with io_uring in the near future and report back my findings. I will leave the status as NEW, since I reckon the io_uring code path deserves attention so it can reach the same stability as threaded IO.
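For reference, here is a minimal sketch of the kind of configuration described above. The VM ID, storage name and disk path are placeholders (not from our actual setup); the option names themselves are the standard Proxmox disk options and their rough QEMU equivalents.

Proxmox VM config (/etc/pve/qemu-server/<vmid>.conf):

    scsihw: virtio-scsi-single
    scsi0: local-lvm:vm-100-disk-0,iothread=1,aio=threads

Approximate QEMU command-line equivalent:

    -object iothread,id=iothread0 \
    -device virtio-scsi-pci,id=scsihw0,iothread=iothread0 \
    -drive file=/dev/vg0/vm-100-disk-0,if=none,id=drive-scsi0,format=raw,cache=none,aio=threads \
    -device scsi-hd,bus=scsihw0.0,drive=drive-scsi0

For the io_uring experiment, the plan is simply to switch aio=threads to aio=io_uring on the same disks (this should need a QEMU new enough to support the io_uring AIO backend).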