On Mon, 30 Jan 2012, Avi Kivity wrote:
> On 01/30/2012 05:32 PM, Eric B Munson wrote:
> > >
> > > Can you point me to the discussion that moved this to be a vm ioctl? In
> > > general, vm ioctls that do things for all vcpus are racy, like here.
> > > You're accessing variables that are protected by the vcpu mutex, and not
> > > taking the mutex (nor can you, since it is held while the guest is
> > > running, unlike most kernel mutexes).
> > >
> >
> > Jan Kiszka suggested that because there isn't a use case for notifying
> > individual vcpus (can vcpus be paused individually?
>
> They can, though the guest will grind to a halt very soon.
>
> > ) it makes more sense
> > to have a vm ioctl.
> >
> > http://thread.gmane.org/gmane.comp.emulators.qemu/131624
> >
> > If the per-vcpu ioctl is the right choice, I can resend those patches.
>
> The races are solvable, but I think it's easier in userspace. It's also
> more flexible, though I don't really see a use for this flexibility.
>

Okay, I will rebase the per-vcpu patches and resend those.

Eric