Re: F20 System Wide Change: Perl 5.18

On 08/07/2013 10:56 AM, Marcela Mašláňová wrote:
On 08/07/2013 03:01 AM, Dennis Gilmore wrote:

On Tue, 06 Aug 2013 13:39:31 -0700
Adam Williamson <awilliam@xxxxxxxxxx> wrote:

On Thu, 2013-06-13 at 11:10 -0600, Kevin Fenzi wrote:
On Thu, 13 Jun 2013 13:23:51 +0000 (UTC)
Petr Pisar <ppisar@xxxxxxxxxx> wrote:

On 2013-06-12, Kevin Fenzi <kevin@xxxxxxxxx> wrote:
So, there's nothing preventing the side tag and rebuild anytime
now, right? 5.18.0 is out, so we could start that work in
rawhide?

Currently 5.18.0 fails one test when running in mock and
koji (because of the terminal usage in the tested Perl
debugger). We think we can have this issue solved in a few days.

Cool.

Could you explain how side tag inheritance works? It inherits
everything from rawhide, even builds made after the side tag was
created,

yes.

except packages whose builds have been already made in the side
tag. Am I right? That means we still get fresh third-party
dependencies from rawhide.

yes. However, there are several downsides:

- Each side tag adds newrepo tasks, which increases load a lot.
- If you rebuild perl-foo-1.0-1 in the side tag against the new
perl and the maintainer then has to fix something in rawhide, they
would build perl-foo-1.0-2 in rawhide, and when the side tag was
merged back over, either everyone would get the older build with
the bug or the newer one built against the old perl. So it's really
important not to take a long time using a side tag, to avoid this
problem as much as possible.
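
As a rough illustration of that inheritance behaviour (assuming the koji
Python client API and the Fedora hub URL; the tag name is the one used in
this thread), the package set a side tag effectively sees can be listed
like this:

import koji

session = koji.ClientSession('https://koji.fedoraproject.org/kojihub')
# latest=True plus inherit=True resolves each package through the tag's
# inheritance chain: builds done in f20-perl win, everything else is
# picked up from the parent (rawhide) tag, including builds made after
# the side tag was created.
for build in session.listTagged('f20-perl', latest=True, inherit=True):
    print build['nvr']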

Seems like this one came true in practice: a 5.18 rebuild run was
done in a side tag and then merged back into Rawhide. Unfortunately,
quite a lot of the 5.18 rebuilds seem to have been done prior to the
general F20 mass rebuild, so the mass rebuild won out and effectively
squelched the perl rebuild.

The f20-perl tag was merged back before the mass rebuild was started,
so everything in the mass rebuild was built against the new perl.
However, because the perl rebuild took about a week, quite a few
packages were rebuilt against the old perl. We need to work out how
to do the perl rebuild quicker. Your analysis is not really correct.

Dennis

If someone knows about a tool which can produce the build order faster
than Petr's tool, it would help ;-)

Marcela

I've been working on improving one of Seth's last scripts, called buildorder, which he wrote to order a set of SRPMs[1].

The current version is far from complete or bug-free yet, but I've attached it to this email.

It's really simple: you give it a set of SRPMs and it spits out the build order for them as precisely as it can determine it.

The main improvement I made was to get rid of the topological sort module he had been using and replace it with my own implementation, which also automatically breaks build loops at the spots where breaking them is likely to be most beneficial (there's a small sketch of that idea below). It does grouping as well, just like Seth's original version, but the actual group output is commented out at the moment, as I'm using it for sequentially rebuilding things, not in parallel, so groups don't matter for my use case. If you want groups, simply remove the comment from the

# print "Group %d (%d reqs)" % (group, minreqs)

line.
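
To make the loop-breaking idea concrete, here is a minimal sketch (with assumed names; it is not the attached script itself): packages with no outstanding build requirements are emitted first, and when only cycles remain, the cycle is broken at the package that the most remaining packages still require.

def build_order(deps):
    # deps maps each SRPM to the set of SRPMs it build-requires;
    # requirements outside the given set are ignored.
    remaining = dict((n, set(r) & set(deps)) for (n, r) in deps.items())
    order = []
    while remaining:
        ready = [n for (n, r) in remaining.items() if not r]
        if not ready:
            # Only loops are left: break one at the node that the most
            # remaining packages still require.
            counts = {}
            for reqs in remaining.values():
                for n in reqs:
                    counts[n] = counts.get(n, 0) + 1
            ready = [max(counts, key=counts.get)]
        for n in ready:
            order.append(n)
            del remaining[n]
        for reqs in remaining.values():
            reqs.difference_update(ready)
    return order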

It also fixes an issue where the global macro definitions didn't get cleared.

I've also added the manual Provides specified in the specfiles, which gave a bit better dependency completion.

Last (and least), there is some commented-out code in it which you can potentially use to find the full build chain for a specific package: you specify one or a few SRPMs you'd like to build, and the script spits out a list of all the SRPMs you'll need to build for those packages, in the necessary order.
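
The commented-out block hard-wires one package; roughly, the idea is a transitive walk like this (sketch only, assumed names), whose result you would then feed to the ordering step:

def build_chain(deps, wanted):
    # deps maps each SRPM to the set of SRPMs it build-requires;
    # wanted is the handful of SRPMs you actually care about.
    chain = {}
    todo = list(wanted)
    while todo:
        node = todo.pop()
        if node in chain or node not in deps:
            continue
        chain[node] = deps[node]
        todo.extend(deps[node])
    return chain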

Just like Seth's script, it has a few obvious problems: for really correct build ordering you need to flip-flop between SRPMs/specfiles and binary RPMs, but that will need a bit of a different approach. As it stands right now, some of the dependencies it can't detect and resolve are build-time generated ones, which might be a showstopper for some uses of it.

I've tested the version I have here with several complete past Fedora trees, for texlive and KDE rebuilds and updates, and in all those cases the output looks pretty sane.

Hope this helps,

Thanks & regards, Phil

--
Philipp Knirsch              | Tel.:  +49-711-96437-470
Manager Core Services        | Fax.:  +49-711-96437-111
Red Hat GmbH                 | Email: Phil Knirsch <pknirsch@xxxxxxxxxx>
Wankelstrasse 5              | Web:   http://www.redhat.com/
D-70563 Stuttgart, Germany
#!/usr/bin/python -tt
# skvidal at fedoraproject.org & pknirsch at redhat.com

# take a set of srpms
# resolve their buildreqs (pkg name only and virtual build provides)
# sort them
# break them into groups for parallelizing the build
# Very simple implementation of a topological sort using a filtering mechanism
# based on the number of requirements and leaf nodes

import sys
import tempfile
import rpm
import subprocess
import os
import glob
import yum
import shutil

def return_binary_pkgs_from_srpm(srpmfn):
    mydir = tempfile.mkdtemp()
    binary_pkgs = []
    rc = subprocess.Popen(['rpm2cpio', srpmfn],stdout=subprocess.PIPE)
    cs = subprocess.Popen(['cpio', '--quiet', '-i', '*.spec'], cwd=mydir,
                          stdin=rc.stdout, stdout=subprocess.PIPE, stderr=open('/dev/null', 'w'))
    output = cs.communicate()[0]
    specs = glob.glob(mydir + '/*.spec')
    if not specs:
        shutil.rmtree(mydir)
        return binary_pkgs
    rpm.reloadConfig()
    try:
        spkg = rpm.spec(specs[0])
        for p in spkg.packages:
            binary_pkgs.append(p.header['name'])
            binary_pkgs.extend(p.header['provides'])
        shutil.rmtree(mydir)
        return binary_pkgs
    except:
        shutil.rmtree(mydir)
        return []

def get_buildreqs(srpms):
    my = yum.YumBase()
    my.preconf.init_plugins=False
    my.setCacheDir()
    build_reqs = {}
    build_bin = {}
    srpms_to_pkgs = {}

    for i in srpms:
        # generate the list of binpkgs the srpms create 
        srpm_short = os.path.basename(i)
        build_bin[srpm_short] = return_binary_pkgs_from_srpm(i)

        # generate the list of provides in the repos we know about from those binpkgs (if any)
        p_names = []
        for name in build_bin[srpm_short]:
            providers = my.pkgSack.searchNevra(name=name)
            if providers:
                p_names.extend(providers[0].provides_names)
        build_bin[srpm_short].extend(p_names)

    for i in srpms:
        # go through each srpm and take its buildrequires and resolve them out to one of other
        # srpms, if possible using the build_bin list we just generated
        # toss out any pkg which doesn't map back - this only does requires NAMES - not versions
        # so don't go getting picky about versioning here.
        lp = yum.packages.YumLocalPackage(filename=i)
        srpm_short = os.path.basename(i)
        # setup the build_reqs
        build_reqs[srpm_short] = set([])
        srpms_to_pkgs[srpm_short] = lp
        for r in lp.requires_names:
            for srpm in build_bin:
                if r in build_bin[srpm]:
                    build_reqs[srpm_short].add(srpm)

    return build_reqs

def main():
    if len(sys.argv) < 2:
        print 'usage: buildorder.py srpm1 srpm2 srpm3'
        sys.exit(1)
    
    srpms = sys.argv[1:]
    print 'Sorting %s srpms' % len(srpms)

    print 'Getting build reqs'
    build_reqs = get_buildreqs(srpms)

    print 'Breaking loops, brutally'
    # need to break any loops up before we pass it to the tsort
    # we warn and nuke the loop - tough noogies - you shouldn't have them anyway
    broken_loop_unsorted = {}
    for (node,reqs) in build_reqs.items():
        broken_loop_unsorted[node] = set()
        for p in reqs:
            if node in build_reqs[p]:
                print 'WARNING: loop: %s and %s' % (node, p)
            else:
                broken_loop_unsorted[node].add(p)

    build_reqs = broken_loop_unsorted

#    new_reqs = {}
#    new_reqs['bzip2-1.0.3-6.el5_5.src.rpm'] = build_reqs['bzip2-1.0.3-6.el5_5.src.rpm']
#    modified = True
#    while modified:
#        modified = False
#        for (node,reqs) in new_reqs.items():
#            for r in reqs:
#                if r in build_reqs.keys() and r not in new_reqs.keys():
#                    modified = True
#                    new_reqs[r] = build_reqs[r]
#
#    build_reqs = new_reqs

    # very simple topological sort algorithm:
    #  - Calculate # of requirements of each component
    #  - List all components that have 0 requires all at once,
    #    otherwise pick a suitable (is required by one of the low requires components) candidate
    #  - Remove all components and their respective requires in all other components 
    group = 0
    while len(build_reqs.keys()) > 0:
        lenreqs = {}
        for (node,reqs) in build_reqs.items():
            lenreqs.setdefault(len(reqs), []).append(node)
        group += 1
        minreqs = min(lenreqs.keys())
#        print "Group %d (%d reqs)" % (group, minreqs)
        if minreqs == 0:
            nodelist = lenreqs[minreqs]
        else:
            nodes = []
            for r1 in sorted(lenreqs.keys()):
                reverse_req = {}
                max = [0, None]
                nodes.extend(lenreqs[r1])
                for n1 in nodes:
                    for req in build_reqs[n1]:
                        reverse_req.setdefault(req, 0)
                        reverse_req[req] += 1
                        if reverse_req[req] > max[0]:
                            max = [reverse_req[req], req]
                if max[0] > 0:
                    nodelist = [max[1],]
                    break
        for node in nodelist:
            print node
            del build_reqs[node]
            for (tnode,treqs) in build_reqs.items():
                treqs.discard(node)
#        print

if __name__ == '__main__':
    main()
