On Wed, 2018-06-13 at 14:28 +0100, Daniel P. Berrangé wrote:
> On Wed, Jun 13, 2018 at 02:58:55PM +0200, Andrea Bolognani wrote:
> > Pretty much exactly how I've created the images you can find on
> > Docker Hub, except for
> > 
> > > RUN mkdir /build
> > > WORKDIR /build
> > 
> > this bit, which AFAICT is entirely unnecessary.
> 
> WORKDIR /build avoids the need for the '-w /build' arg in
> the .travis.yml docker run command. I think there's a slight
> plus to having the workdir set automatically, avoiding the
> need for a -w arg.

The images we're creating are basically generic OS images with a few
extra packages baked in, so tying them to *one* of the things we're
going to use them for (although arguably the main one) this way is
wrong IMHO: providing the -w argument at run time is much cleaner.

To put it another way, if for whatever reason we decided to change
the working directory at some point in the future, with your approach
we would have to post patches to two different projects rather than a
single one.

> > The one thing I haven't quite figured out yet is where to store
> > the resulting Dockerfiles. If we committed them to some repository
> > we could take advantage of Docker Hub's autobuild support, which
> > would be pretty neat; on the other hand, being generated content,
> > they have no business being committed, plus it would be tricky to
> > ensure the generated files are always in sync with the source
> > mappings without introducing a bunch of scaffolding to the
> > libvirt-jenkins-ci repository.
> 
> I think we should just have the dockerfile templates (ie with
> the ::PACKAGE:: placeholder) in the libvirt-jenkins-ci repo.
> We don't need to store the expanded dockerfile. Then we can have
> a CI job somewhere that automatically rebuilds & uploads new
> docker images whenever a change is pushed to libvirt-jenkins-ci.

That means rolling our own autobuild pipeline instead of taking
advantage of Docker Hub's: we'd have to make sure we don't kick off
builds unless the list of packages has actually changed, have a
separate Docker Hub account with write permissions to the
organization, actually run those builds somewhere... Not saying it's
totally out of the question, just pointing out the hurdles and
wondering whether that's really the best way forward.

What about generating the Dockerfiles manually and committing them to
libvirt.git every now and then, as the need arises? That's basically
what we're doing at the moment to keep the list of packages in
.travis.yml in sync with libvirt-jenkins-ci.git, and while not
perfect it's been serving us reasonably well so far... We could then
hook up Docker Hub to perform container builds whenever the
Dockerfiles in libvirt.git change.

> > I don't mind having several images and using the tag only for the
> > version number, if that's something that will make the result look
> > less alien to Docker users; however, I think we should keep the
> > names consistent with what we use on our CentOS CI, so it would be
> > ubuntu:18 instead of ubuntu:18.04.
> 
> NB that is ambiguous as Ubuntu does two releases a year, 18.04 and
> 18.10

It's okay for us, because we only care about LTS Ubuntu releases
anyway, so there's no ambiguity. We already use that naming scheme in
the libvirt-jenkins-ci repository and I'd really rather remain
consistent.

-- 
Andrea Bolognani / Red Hat / Virtualization

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list
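
A minimal sketch of the two approaches being compared above, assuming
a hypothetical image name and source path (neither is taken from the
actual .travis.yml or the libvirt-jenkins-ci templates).

Baking the working directory into the image, as in Daniel's snippet,
means the Dockerfile decides where builds happen:

    FROM ubuntu:18.04
    RUN mkdir /build
    WORKDIR /build

Keeping the image generic, as Andrea prefers, means the caller picks
the directory at run time instead:

    docker run -v "$(pwd)":/build -w /build \
        example/libvirt-buildenv-ubuntu-18 make

And a rough guess at what a Dockerfile template carrying the
::PACKAGE:: placeholder might look like, to be expanded from the
package mappings in libvirt-jenkins-ci before the image is built:

    FROM ubuntu:18.04
    RUN apt-get update && \
        apt-get install -y ::PACKAGE:: && \
        apt-get clean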