>>> On 9/19/2010 at 01:02 PM, in message <20100919170231.GA12620@xxxxxxxxxx>,
"Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote:
> I think I see the following (theoretical) race:
>
> During irqfd assign, we drop irqfds lock before we
> schedule inject work.  Therefore, deassign running
> on another CPU could cause shutdown and flush to run
> before inject, causing use after free in inject.
>
> A simple fix is to schedule inject under the lock.

I swear there was some reason why the schedule_work() was done outside of
the lock, but I can't for the life of me remember why anymore (it obviously
was a failing on my part not to _comment_ why, if there was such a reason).
So, short of recalling what that reason was, and given that Michael's
theory seems rational and legitimate:

Acked-by: Gregory Haskins <ghaskins@xxxxxxxxxx>

>
> Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
> ---
>
> If the issue is real, this might be a 2.6.36 and -stable
> candidate.  Comments?
>
>  virt/kvm/eventfd.c |    3 ++-
>  1 files changed, 2 insertions(+), 1 deletions(-)
>
> diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
> index 66cf65b..c1f1e3c 100644
> --- a/virt/kvm/eventfd.c
> +++ b/virt/kvm/eventfd.c
> @@ -218,7 +218,6 @@ kvm_irqfd_assign(struct kvm *kvm, int fd, int gsi)
> 	events = file->f_op->poll(file, &irqfd->pt);
>
> 	list_add_tail(&irqfd->list, &kvm->irqfds.items);
> -	spin_unlock_irq(&kvm->irqfds.lock);
>
> 	/*
> 	 * Check if there was an event already pending on the eventfd
> @@ -227,6 +226,8 @@ kvm_irqfd_assign(struct kvm *kvm, int fd, int gsi)
> 	if (events & POLLIN)
> 		schedule_work(&irqfd->inject);
>
> +	spin_unlock_irq(&kvm->irqfds.lock);
> +
> 	/*
> 	 * do not drop the file until the irqfd is fully initialized, otherwise
> 	 * we might race against the POLLHUP
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html