Re: qemu-kvm vs. qemu: Terminate cpu loop on reset?

On Fri, Jan 07, 2011 at 06:30:57PM +0100, Jan Kiszka wrote:
> Am 07.01.2011 18:16, Gleb Natapov wrote:
> > On Fri, Jan 07, 2011 at 05:59:34PM +0100, Jan Kiszka wrote:
> >> Am 07.01.2011 17:53, Gleb Natapov wrote:
> >>> On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote:
> >>>> Hi,
> >>>>
> >>>> does anyone immediately know if this hunk from vl.c
> >>>>
> >>>> @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void)
> >>>>      } else {
> >>>>          reset_requested = 1;
> >>>>      }
> >>>> +    if (cpu_single_env) {
> >>>> +        cpu_single_env->stopped = 1;
> >>>> +        cpu_exit(cpu_single_env);
> >>>> +    }
> >>>>      qemu_notify_event();
> >>>>  }
> >>>>
> >>>> is (semantically) relevant for upstream as well? IIUC, it ensures that
> >>>> the kvm cpu loop is not continued if an IO access called into
> >>>> qemu_system_reset_request.
> >>>>
> >>> I don't know TCG well enough to tell. If TCG can continue vcpu execution
> >>> after io without checking reset_requested, then it is relevant for
> >>> upstream too.
> >>
> >> I was first of all thinking about kvm upstream, but their handling
> >> differs much less upstream than in current qemu-kvm. Anyway, I need to
> >> dig into the details.
> >>
> >>>
> >>>> If yes, then it would be a good time to push a patch: these bits will
> >>>> fall to dust on next merge from upstream (vl.c no longer has access to
> >>>> the cpu state).
> >>>>
> >>> Then on the next merge the cpu state will have to be exposed to vl.c
> >>> again. This code cannot be dropped in qemu-kvm.
> >>
> >> I think a cleaner approach, even if it's only temporarily required, is
> >> to move that code to cpus.c. That's likely also the way to go when we
> >> need it upstream. 
> > It doesn't matter where the code resides as long as it is called on
> > reset.
> 
> It technically matters for the build process (vl.c is built once these
> days, cpus.c is built per target).
> 
Yes, I understand the build requirement. Runtime behaviour should not
change.

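Something like the below is what I have in mind (just a sketch from memory,
and cpu_stop_current() is only a placeholder name, not existing code):

/* cpus.c (built per target): stop the vcpu that triggered the reset so
 * its execution loop drops back to the main loop, where reset_requested
 * is then handled. */
void cpu_stop_current(void)
{
    if (cpu_single_env) {
        cpu_single_env->stopped = 1;
        cpu_exit(cpu_single_env);
    }
}

/* vl.c (built once) then no longer needs to touch the cpu state: */
void qemu_system_reset_request(void)
{
    if (no_reboot) {
        shutdown_requested = 1;
    } else {
        reset_requested = 1;
    }
    cpu_stop_current();      /* replaces the open-coded hunk above */
    qemu_notify_event();
}
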
> In any case, we apparently need to fix upstream; I'm playing with an
> approach.
> 
> > 
> >>            If upstream does not need it, we have to understand why and
> >> maybe adopt its pattern (the ultimate goal is unification anyway).
> >>
> > I don't consider upstream kvm a working product. The goal should be
> > moving to the qemu-kvm code in upstream, preserving all the knowledge we
> > acquired while making it production-grade code.
> 
> We had this discussion before. My goal remains to filter the remaining
> upstream fixes out of the noise, adjust both versions so that they are
> apparently identical, and then switch to a single version.
> 
I thought there was an agreement to accept the qemu-kvm implementation as-is
into upstream (without some parts, like device assignment). If you look
at qemu-kvm you'll see that the upstream implementation is marked as
OBSOLETE_KVM_IMPL.

> We are on a good track now. I predict that we will be left with only one
> or two major additional features in qemu-kvm in a few months from now,
> no more duplications with subtle differences, and production-grade kvm
> upstream stability.
> 
You are optimistic. My prediction is that it will take at least one major RHEL
release until such a merged code base becomes production-grade. That
is when most of the bugs introduced by eliminating the subtle differences
between the working and non-working versions will be found :)

BTW, do you have a plan for how to move upstream to a thread per vcpu?
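I mean roughly the qemu-kvm model: one host thread per vcpu, each running its
own execution loop. A simplified sketch, not the actual code:

static void *kvm_vcpu_thread_fn(void *arg)
{
    CPUState *env = arg;

    /* per-vcpu setup (kvm_init_vcpu, signal handling, ...) omitted */
    while (1) {
        kvm_cpu_exec(env);    /* run the guest until the next exit */
        /* process stop/reset/pause requests under the global lock */
    }
    return NULL;
}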

--
			Gleb.

