Re: Improving the compose: leave the current compose in place

On Tue, Nov 27, 2018 at 7:01 PM Paul Frields <stickster@xxxxxxxxx> wrote:
>
> On Tue, Nov 27, 2018 at 9:59 AM Owen Taylor <otaylor@xxxxxxxxxx> wrote:
> > A lot of discussion about improving the compose process seems to end up
> > with a "reality check" - that ideas have already been tried but don't
> > work because of requirements a) b) c) d). You can't have the pony, but
> > maybe if a lot of effort is put into it, you can have a faster rocking
> > horse.
> >
> > If we want to fundamentally improve the Fedora workflow, we need compose
> > ponies; we can't just have rocking horses!
> >
> > Perhaps it would make sense to leave the current 8-10 hour compose in
> > place for the foreseeable future, and work on a new system in parallel
> > where the primary constraint is to be as fast as possible. Hopefully
> > most problems with the slow compose will get sorted out in the fast
> > composes, and the slow compose will become more reliable. Perhaps in a
> > distant future, we can make the new system do everything.
>
> Indeed, this is basically the investigation I've proposed. I also think
>
> > I don't know what the system would look like exactly, but you could
> > imagine things like:
> >
> >  * Composed of several micro-composes (micro-compose-services?) to
> > avoid blocking on everything completing successfully.
> >
> >  * Able to do speculative composes for CI
> >
> >  * Either x86_64-only, or with decoupled architectures so that we can
> > throw x86_64 hardware (or cloud resources) at it, and make it super
> > fast.
> >
> >  * No I/O to /mnt/koji during the compose - having a big network
> > share be central to the process creates a performance bottleneck,
> > makes it hard to move to the cloud, and potentially adds a lot of
> > "noise" to figuring out what is going on when things are slow
> > because of some other, entirely different thing going on.
> >
> > Add your own bullet points :-)
>
> I would like to redefine a couple working assumptions:
>
> * Big tools are unwieldy and inevitably silo knowledge. The people
> behind them are often smart, hard-working, and care about great
> results. But bedrock FOSS principles say we get more value from
> rapidly iterating tools to which many people can/do contribute. We
> should see if we can avoid big tools that solve everything.
>
> * Reproducibility is something we can better enforce at development
> time than at use time. It's pretty easy to pick one or more git heads at
> a certain time (for a tool, a containerized environment, etc.). Let's
> not get one hand tied behind our back at the outset via outmoded
> assumptions.

That is not entirely true. A level of reproducibility also has to be
enforced at build time, based on the versions of the other packages
that a package has been built against. The versions of the components
that another component is built/composed against greatly affect its
reproducibility, and that information is not in git.
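
To make that concrete, here is a rough sketch (purely illustrative;
not any existing tool in the compose pipeline, and the manifest file
name and JSON layout are made up) of capturing that build-time state
by snapshotting the package set of the build environment and storing
it alongside the build:

    #!/usr/bin/python3
    # Illustrative sketch only: record the exact package versions
    # installed in the build environment, since the "built against"
    # set is not recoverable from git.
    import json
    import subprocess

    def snapshot_buildroot(outfile="buildroot-manifest.json"):
        # One "NAME EPOCH:VERSION-RELEASE.ARCH" entry per line
        out = subprocess.check_output(
            ["rpm", "-qa", "--qf",
             "%{NAME} %{EPOCH}:%{VERSION}-%{RELEASE}.%{ARCH}\n"],
            text=True)
        pkgs = dict(line.split(" ", 1) for line in out.splitlines())
        with open(outfile, "w") as f:
            json.dump(pkgs, f, indent=2, sort_keys=True)
        return pkgs

    if __name__ == "__main__":
        snapshot_buildroot()

Comparing two such manifests from two builds of the same component
would show exactly which buildroot versions differed.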

Peter
_______________________________________________
devel mailing list -- devel@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to devel-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@xxxxxxxxxxxxxxxxxxxxxxx



