Re: [RFC] Unify KVM kernel-space and user-space code into a single project

* Zachary Amsden <zamsden@xxxxxxxxxx> wrote:

> On 03/18/2010 12:50 AM, Ingo Molnar wrote:
> >* Avi Kivity<avi@xxxxxxxxxx>  wrote:
> >
> >>>The moment any change (be it as trivial as fixing a GUI detail or as
> >>>complex as a new feature) involves two or more packages, development speed
> >>>slows down to a crawl - while the complexity of the change might be very
> >>>low!
> >>Why is that?
> >It's very simple: because the contribution latencies and overhead compound,
> >almost inevitably.
> >
> >If you ever tried to implement a combo GCC+glibc+kernel feature you'll know
> >...
> >
> >Even with the best-run projects in existence it takes forever and is very
> >painful - and here i talk about first hand experience over many years.
> 
> Ingo, what you miss is that this is not a bad thing.  Fact of the
> matter is, it's not just painful, it downright sucks.

Our experience is the opposite: we have tried both variants, and I am reporting 
honestly on our experience with both models.

You only have experience about one variant - the one you advocate.

See the asymmetry?

> This is actually a Good Thing (tm).  It means you have to get your
> feature and its interfaces well defined and able to version forwards
> and backwards independently from each other.  And that introduces
> some complexity and time and testing, but in the end it's what you
> want.  You don't introduce a requirement to have the feature, but
> take advantage of it if it is there.
> 
> It may take everyone else a couple years to upgrade the compilers,
> tools, libraries and kernel, and by that time any bugs introduced by
> interacting with this feature will have been ironed out and their
> patterns well known.

Sorry, but this is plainly not true. The 2.4->2.6 kernel cycle debacle has taught 
us that waiting a long time to 'iron out' the details has the following effects:

 - developer pain
 - user pain
 - distro pain
 - disconnect
 - loss of developers, testers and users
 - grave bugs discovered months (years ...) down the line
 - untested features
 - developer exhaustion

It didn't work, trust me - and I've been around long enough to have suffered 
through the whole 2.5.x misery. Some of our worst ABIs date from that cycle as 
well.

So we first created the 2.6.x process, then, as we saw that it worked much 
better, we _sped up_ the kernel development process some more, to what many 
claimed was an impossible, crazy pace: a two-week merge window, 2.5 months of 
stabilization and a stable release every 3 months.

And you can also see the countless examples of carefully drafted, well 
thought-out, committee-written computer standards that were honed for years, 
yet are not worth the paper they are written on.

'Extra time' and 'extra bureaucratic overhead to think things through' are 
about the worst things you can inject into a development process.

You should think of the human brain as a cache - the 'closer' things are, 
both in time and physically, the better they end up being. Also, the more 
gradual and the more concentrated a thing is, the better it works out in 
general. This is part of basic human nature.

Sorry, but I really think you are trying to rationalize a disadvantage 
here ...

	Ingo
