On 26/09/2022 at 23:28, Sean Christopherson wrote:
> On Mon, Sep 26, 2022, David Hildenbrand wrote:
>> On 23.09.22 15:38, Emanuele Giuseppe Esposito wrote:
>>>
>>> On 23/09/2022 at 15:21, David Hildenbrand wrote:
>>>> On 23.09.22 15:10, Emanuele Giuseppe Esposito wrote:
>>>>>
>>>>> On 19/09/2022 at 19:30, David Hildenbrand wrote:
>>>>>> On 19.09.22 09:53, David Hildenbrand wrote:
>>>>>>> On 18.09.22 18:13, Emanuele Giuseppe Esposito wrote:
>>>>>>>>
>>>>>>>> On 09/09/2022 at 16:30, Sean Christopherson wrote:
>>>>>>>>> On Fri, Sep 09, 2022, Emanuele Giuseppe Esposito wrote:
>>>>>>>>>> KVM is currently capable of receiving a single memslot update through
>>>>>>>>>> the KVM_SET_USER_MEMORY_REGION ioctl.
>>>>>>>>>> The problem arises when we want to atomically perform multiple updates,
>>>>>>>>>> so that readers of the memslot active list avoid seeing incomplete
>>>>>>>>>> states.
>>>>>>>>>>
>>>>>>>>>> For example, in RHBZ
>>>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1979276
>
> ...
>
>> As Sean said, "This is an awful lot of a complexity to take on for something
>> that appears to be solvable in userspace."
>
> And if the userspace solution is unpalatable for whatever reason, I'd like to
> understand exactly what KVM behavior is problematic for userspace. E.g. the
> above RHBZ bug should no longer be an issue as the buggy commit has since been
> reverted.

It still is an issue, because I can still reproduce the bug, as also pointed
out in multiple comments below.

> If the issue is KVM doing something nonsensical on a code fetch to MMIO, then I'd
> much rather fix _that_ bug and improve KVM's user exit ABI to let userspace handle
> the race _if_ userspace chooses not to pause vCPUs.

Also, on the BZ everyone (Paolo included) seems to agree that the issue is the
non-atomic memslot update. To be more precise, what I did mostly follows what
Peter explained in comment 19:
https://bugzilla.redhat.com/show_bug.cgi?id=1979276#c19