Am 07.01.2011 18:53, Gleb Natapov wrote:
> On Fri, Jan 07, 2011 at 06:30:57PM +0100, Jan Kiszka wrote:
>> Am 07.01.2011 18:16, Gleb Natapov wrote:
>>> On Fri, Jan 07, 2011 at 05:59:34PM +0100, Jan Kiszka wrote:
>>>> Am 07.01.2011 17:53, Gleb Natapov wrote:
>>>>> On Fri, Jan 07, 2011 at 04:57:31PM +0100, Jan Kiszka wrote:
>>>>>> Hi,
>>>>>>
>>>>>> does anyone immediately know if this hunk from vl.c
>>>>>>
>>>>>> @@ -1278,6 +1197,10 @@ void qemu_system_reset_request(void)
>>>>>>      } else {
>>>>>>          reset_requested = 1;
>>>>>>      }
>>>>>> +    if (cpu_single_env) {
>>>>>> +        cpu_single_env->stopped = 1;
>>>>>> +        cpu_exit(cpu_single_env);
>>>>>> +    }
>>>>>>      qemu_notify_event();
>>>>>>  }
>>>>>>
>>>>>> is (semantically) relevant for upstream as well? IIUC, it ensures
>>>>>> that the kvm cpu loop is not continued if an IO access called into
>>>>>> qemu_system_reset_request.
>>>>>>
>>>>> I don't know TCG well enough to tell. If TCG can continue vcpu
>>>>> execution after io without checking reset_requested, then it is
>>>>> relevant for upstream too.
>>>>
>>>> I was first of all thinking about kvm upstream, but their handling
>>>> differs much less upstream than in current qemu-kvm. Anyway, I need
>>>> to dig into the details.
>>>>
>>>>>
>>>>>> If yes, then it would be a good time to push a patch: these bits
>>>>>> will fall to dust on the next merge from upstream (vl.c no longer
>>>>>> has access to the cpu state).
>>>>>>
>>>>> On the next merge, cpu state will have to be exposed to vl.c then.
>>>>> This code cannot be dropped in qemu-kvm.
>>>>
>>>> I think a cleaner approach, even if it's only temporarily required,
>>>> is to move that code to cpus.c. That's likely also the way to go
>>>> when we need it upstream.
>>> It doesn't matter where the code resides as long as it is called on
>>> reset.
>>
>> It technically matters for the build process (vl.c is built once these
>> days, cpus.c is built per target).
>>
> Yes, I understand the build requirement. Runtime behaviour should not
> change.

Yep, for sure.
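To make the cpus.c variant concrete, here is roughly what I have in mind
(completely untested sketch; the helper name cpu_stop_current is just a
placeholder I made up):

/* cpus.c -- untested sketch; cpu_stop_current is a placeholder name,
 * cpu_single_env and cpu_exit() come from the usual cpu headers */
void cpu_stop_current(void)
{
    if (cpu_single_env) {
        /* Mark the current vcpu as stopped so its exec loop does not
         * resume guest code after the pending IO access completes. */
        cpu_single_env->stopped = 1;
        cpu_exit(cpu_single_env);
    }
}

/* vl.c would then shrink to a call into the helper: */
void qemu_system_reset_request(void)
{
    if (no_reboot) {
        shutdown_requested = 1;
    } else {
        reset_requested = 1;
    }
    cpu_stop_current();
    qemu_notify_event();
}

That keeps vl.c free of per-target cpu state while preserving the
qemu-kvm behaviour on reset.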
BTW, the self-IPI on a pending exit request is there for a reason, I bet.
To complete half-done string I/O or something like that? That would be
the next patch for upstream then.

>
>> In any case, we apparently need to fix upstream. I'm playing with some
>> approach.
>>
>>>
>>>> If upstream does not need it, we have to understand why and maybe
>>>> adopt its pattern (the ultimate goal is unification anyway).
>>>>
>>> I don't consider kvm upstream a working product. The goal should be
>>> moving the qemu-kvm code into upstream, preserving all the knowledge
>>> we acquired while making it production-grade code.
>>
>> We had this discussion before. My goal remains to filter the remaining
>> upstream fixes out of the noise, adjust both versions so that they are
>> apparently identical, and then switch to a single version.
>>
> I thought there was an agreement to accept the qemu-kvm implementation
> as is into upstream (without some parts like device assignment). If you
> look at qemu-kvm, you'll see that the upstream implementation is marked
> as OBSOLETE_KVM_IMPL.

You can't merge both trees without introducing regressions, either in the
kvm part or in some other section that qemu-kvm did not stress. IMO,
there is no way around understanding all the nice little "fixes" that
piled up over the years and translating them into proper, documented
patches.

>
>> We are on a good track now. I predict that we will be left with only
>> one or two major additional features in qemu-kvm a few months from
>> now, no more duplications with subtle differences, and
>> production-grade kvm upstream stability.
>>
> You are optimistic. My prediction is that it will take at least one
> major RHEL release until such a merged code base becomes
> production-grade. That is when most of the bugs introduced by
> eliminating the subtle differences between the working and the
> non-working version will be found :)

The more upstream code qemu-kvm stresses, the faster this convergence
will happen. And there is really not that much left. E.g., I have a
qemu-kvm-x86.c here that is <400 LOC.

>
> BTW, do you have a plan for how to move upstream to thread-per-vcpu?

Upstream has this already, but it's, once again, a different
implementation. Understanding those differences is one of the next steps.
In fact, as posted recently, unifying the execution model implementations
is the only big problem I see. In-kernel irqchips and device assignment
are things that can live in qemu-kvm without many conflicts until they
are finally mergeable.
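For reference, the upstream iothread model boils down to one host thread
per vcpu, all serialized against the iothread by the global mutex. Very
roughly, and with illustrative names rather than the actual upstream
functions:

/* Schematic per-vcpu thread loop -- names are illustrative, not the
 * actual upstream code; the real thing lives in cpus.c under
 * CONFIG_IOTHREAD. */
static void *vcpu_thread_fn(void *arg)
{
    CPUState *env = arg;

    qemu_mutex_lock_iothread();  /* vcpus/iothread share one big lock */
    kvm_init_vcpu(env);

    while (1) {
        if (cpu_can_run(env)) {
            /* the lock is dropped around the actual KVM_RUN ioctl */
            kvm_cpu_exec(env);
        }
        /* block until a kick (signal/condvar) or pending work arrives */
        qemu_wait_io_event(env);
    }
    return NULL;
}

The devil is in the details of the kicking and halting logic, and that is
where upstream and qemu-kvm mostly diverge.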
Jan