How can a company help, officially?


On Tue, 12 Apr 2011, Les Mikesell wrote:

> But Johnny's postings seem pretty insistent on never releasing the
> actual scripts in a form that can be used elsewhere or by anyone outside
> the project, so maybe a more productive approach would be some way of

oh horse puckey, troll -- it is just 'shoulder to the wheel, 
nose to the grindstone' repetitive work, some of which can be 
automated with minimal scripting.  [I have scripts that 'read' 
buildlogs for perl modules, and for R packages, that 'tell' me 
what to solve next, for example; most people setting up 
automated builders start with a grep for 'is needed by' to 
identify what needs to populate a build chroot for a given 
target package.]
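
A rough sketch of that grep-driven approach (not the actual 
scripts; the log location is just a placeholder):

  # mine the buildlogs for unresolved build dependencies and
  # report what still needs to land in the build chroot
  # (the /srv/build/logs path is an assumption, not a standard)
  grep -h 'is needed by' /srv/build/logs/*.log \
      | awk '{print $1}' | sort -u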

Many of the tools one uses are simple scripts that are 
'throwaways', written to investigate something that looks like 
it may be 'sketchy'.  Upstream's Fedora project has a partial 
set in its 'koji' project, but 'Release Engineering' still 
needs to 'look in' and tweak a build sometimes.  'Solving' 
circular dependencies is another issue where a couple of 
'manual builds' are needed to satisfy missing BR's -- I 
mentioned 'valgrind-devel' and 'openmpi-devel' a while ago in 
the upstream's '6' SRPMs ... there are others, always have 
been, and (probably) always will be, because APIs change over 
time, and packages get renamed and re-organized.
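
The usual way out of such a loop, sketched loosely (whether a 
given spec actually exposes a suitable bcond is an assumption; 
often the spec has to be patched by hand instead):

  # first pass: build one side of the circle with the offending
  # BuildRequires conditionally disabled
  rpmbuild --rebuild --without openmpi valgrind-*.src.rpm

  # once the other side exists, rebuild normally so the final
  # package matches the upstream configuration
  rpmbuild --rebuild valgrind-*.src.rpm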

off the top of my head, here is the meta-code:

1) acquire a pile of SRPMs; set up lots of drive space, and a 
secure location for one or more machines in a builder farm
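
(a minimal sketch of the acquisition part only; the mirror URL 
is a placeholder for whatever source archive one is actually 
entitled to pull from)

  # pull the source RPMs down into a local pile
  mkdir -p /srv/build/SRPMS
  rsync -avH --progress \
      rsync://mirror.example.org/distro/6/os/SRPMS/ \
      /srv/build/SRPMS/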

2) optionally sort into a build sequence based on BuildReq's, 
or just look at an install set (doing the reverse lookup from 
package to parent SRPM) of a desired end install state

The '2500 possible paths' complaint earlier in the week makes 
it clear that the complainer had simply not gotten this far.
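
A sketch of that reverse lookup, assuming the desired end 
state is expressed as the installed packages on a model system:

  # map each installed binary package back to its parent source
  # RPM, giving the de-duplicated list of SRPMs that must be
  # built to reproduce the install set
  rpm -qa --queryformat '%{SOURCERPM}\n' | sort -u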

3) do an initial build pass to see if self-hosting can be 
attained [it won't in the case of a new major release; it may 
in the case of updates]; this should be done in clean chroots 
for each package, whether via mock, or other mechanisms
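
A minimal sketch of such a pass with mock (the config name and 
paths are assumptions):

  # one clean chroot per package; keep every buildlog for mining
  for srpm in /srv/build/SRPMS/*.src.rpm; do
      mock -r my-rebuild-6-x86_64 \
           --resultdir /srv/build/results/$(basename "$srpm" .src.rpm) \
           --rebuild "$srpm" \
          || echo "$srpm" >> /srv/build/failed.txt
  done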

The initial package list in the chroot is almost irrelevant, 
as 'Nico' kept whining about: if one is reverse-engineering 
another's build environment, what they 'assumed' would be 
present becomes clear during the build, from the failure 
messages in the buildlogs and from the BuildRequires.  This 
environment may not be static, because BR's shift around as 
packages are re-organized between archives.

4) supplement the initial batch of SRPMs with 'bootstrap' 
SRPMs / binary archive from 'nearby' versions of CentOS, 
Fedora, or local packaging
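
One way to wire that in (a sketch; the repo id, paths, and the 
exact mock config layout are assumptions about one's setup):

  # publish the 'nearby' bootstrap packages as a local yum repo
  createrepo /srv/build/bootstrap/

  # then add a repo stanza to the chroot's yum configuration in
  # the mock config (config_opts['yum.conf']), e.g.:
  #
  #   [bootstrap]
  #   name=bootstrap
  #   baseurl=file:///srv/build/bootstrap/
  #   enabled=1
  #   gpgcheck=0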

5) attain an initial closure on the rebuild; then 'prove' that 
it is 'self-hosting' by rebuilding it all again in a second 
pass with the binary fruit from an earlier pass [this step 
should be optionally repeatable over and over again at any 
later step, and it is a problem if it is not]
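
A sketch of the 'prove it' pass, assuming the pass-one results 
were collected under one tree:

  # turn the binary fruit of pass one into the repo feeding
  # pass two
  createrepo /srv/build/pass1-rpms/

  # then repeat the whole build with the mock config pointed at
  # that repo; a distribution that cannot rebuild itself from
  # its own binaries is not yet self-hosting, and this loop
  # should stay runnable at any later point
  for srpm in /srv/build/SRPMS/*.src.rpm; do
      mock -r my-rebuild-6-x86_64 --rebuild "$srpm"
  done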

6) fork, tine a: if replicating another's archive, continue 
with the rest of the SRPMs in the set until all are built; if 
building a local custom distribution, add a 'local packagings' 
SRPM archive and loop back through step 1

7) fork, tine b: examine each package, possibly looking at 
prior efforts (i.e., prior needs for patches) to apply trademark 
elidement and branding changes; re-submit into the 
buildsystem with patches; if building for local use only, this 
step can be as formal or as casual as one desires
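
For illustration only, the usual shape of carrying such a 
change (package and patch names are made up; the ~/rpmbuild 
topdir assumes current default rpm macros):

  # unpack the SRPM, drop in the rebrand patch, reference it
  # from the spec with ordinary Patch: / %patch lines, then
  # re-roll the source RPM for the buildsystem
  rpm -ivh foo-1.0-1.src.rpm
  cp foo-rebrand-logos.patch ~/rpmbuild/SOURCES/
  # edit ~/rpmbuild/SPECS/foo.spec by hand or with sed, then:
  rpmbuild -bs ~/rpmbuild/SPECS/foo.spec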

8) continue running 'diff's on package contents between a 
'master' and a 'candidate', with an eye to 'binary 
compatibility' differences [output from 'ldd' is important 
here]; decide whether they are material, and identify 'why' 
they vary; as needed, re-submit into the buildsystem with 
patches, with changes to the packages comprising the build 
environment, or with different 'options' passed in, as 
discerned from reading the 'spec' file conditionals
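
A minimal sketch of that comparison for one package pair (the 
package name and paths are placeholders):

  # compare what the two builds ship
  diff <(rpm -qpl master/foo-1.0-1.x86_64.rpm | sort) \
       <(rpm -qpl candidate/foo-1.0-1.x86_64.rpm | sort)

  # compare what they claim to provide at the ABI level
  diff <(rpm -qp --provides master/foo-1.0-1.x86_64.rpm | sort) \
       <(rpm -qp --provides candidate/foo-1.0-1.x86_64.rpm | sort)

  # and, on an extracted tree, what the binaries actually link
  # against -- this is where 'ldd' earns its keep
  rpm2cpio candidate/foo-1.0-1.x86_64.rpm | cpio -idm
  ldd ./usr/bin/foo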

9) barrier synchronization point of the fork tines: when all 
builds complete, and all patches are built to one's 
satisfaction, attend to 'installer' [anaconda] patches, and 
build ISOs

Fixing anaconda up will usually necessitate additional rounds 
of builds, because the installer is a continuously moving 
target.  Later build rounds are already using prior round 
packages for chroot building, so closure should not be an 
issue
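
(a sketch of the final ISO mastering step only, assuming the 
installable tree has already been composed; the volume label 
and paths are placeholders)

  # master a bootable ISO from the composed tree, then implant
  # the media checksum so the installer's media check can verify
  genisoimage -R -J -T \
      -b isolinux/isolinux.bin -c isolinux/boot.cat \
      -no-emul-boot -boot-load-size 4 -boot-info-table \
      -V "MyRebuild-6-x86_64" \
      -o MyRebuild-6-x86_64.iso /srv/build/compose/x86_64/os/
  implantisomd5 MyRebuild-6-x86_64.iso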

10) attend to securely signing the packages -- air-gaps are 
nice
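
The mechanics, once the key material is on the (ideally 
air-gapped) signing box, are brief (key name and paths are 
placeholders):

  # ~/.rpmmacros on the signing host:
  #   %_gpg_name  My Rebuild Signing Key <signing@example.org>

  rpm --addsign /srv/build/release/*.rpm
  # spot-check: every line should report the signature as OK
  rpm --checksig /srv/build/release/*.rpm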

[11) KB has mentioned automated testing -- it would drop in 
here, once installable images are available, in the case of a 
new major release; signing and testing can move earlier in 
the process for minor release updates]

12) stage the packages and images to the master archive, and 
to a mirroring master; manage the release announcements, and 
'flip the bit', etc.
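
(the staging itself is mostly an rsync; a sketch, with 
placeholder hosts and paths)

  # push the finished tree to the master archive, preserving
  # hardlinks, and let the mirror network pull from the
  # mirroring master on its own schedule
  rsync -avH --delete /srv/build/release/6/ \
      archive-master.example.org:/var/ftp/pub/6/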

-- Russ herrold

