https://bugzilla.kernel.org/show_bug.cgi?id=199727

--- Comment #15 from Roland Kletzing (devzero@xxxxxx) ---

yes, i was using cache=none, and io_uring also caused issues.

> aio=threads avoids softlockups because the preadv(2)/pwritev(2)/fdatasync(2)
> syscalls run in worker threads that don't take the QEMU global mutex.
> Therefore vcpu threads can execute even when I/O is stuck in the kernel due
> to a lock.

yes, it was a long search/journey to get to this information/these params...

regarding io_uring - after proxmox enabled it as the default, it was reverted
again once issues were reported. have a look at:

https://github.com/proxmox/qemu-server/blob/master/debian/changelog

maybe it's not ready for primetime yet!?

 -- Proxmox Support Team <support@xxxxxxxxxxx>  Fri, 30 Jul 2021 16:53:44 +0200

qemu-server (7.0-11) bullseye; urgency=medium

<snip>
  * lvm: avoid the use of io_uring for now
<snip>

 -- Proxmox Support Team <support@xxxxxxxxxxx>  Fri, 23 Jul 2021 11:08:48 +0200

qemu-server (7.0-10) bullseye; urgency=medium

<snip>
  * avoid using io_uring for drives backed by LVM and configured for
    write-back or write-through cache
<snip>

 -- Proxmox Support Team <support@xxxxxxxxxxx>  Mon, 05 Jul 2021 20:49:50 +0200

qemu-server (7.0-6) bullseye; urgency=medium

<snip>
  * For now do not use io_uring for drives backed by Ceph RBD, with KRBD and
    write-back or write-through cache enabled, as in that case some polling/IO
    may hang in QEMU 6.0.
<snip>
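
fwiw, for anyone else searching: the relevant drive options end up looking
roughly like this (the VMID 100, the volume name and the /dev/vg0/... path
below are just placeholder examples, adjust to your own setup):

  # proxmox: switch a disk to threads-based aio with cache=none
  # (100 / local-lvm:vm-100-disk-0 are example values)
  qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none,aio=threads

  # roughly the same thing on a plain qemu command line
  qemu-system-x86_64 ... \
    -drive file=/dev/vg0/vm-100-disk-0,format=raw,if=virtio,cache=none,aio=threads

with aio=io_uring (or aio=native) instead of aio=threads you are back on the
in-kernel submission paths discussed above.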